| pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths 0-18.3M) | metadata (stringlengths 2-1.07B) | id (stringlengths 5-122) | last_modified (null) | tags (listlengths 1-1.84k) | sha (null) | created_at (stringlengths 25) |
|---|---|---|---|---|---|---|---|---|
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0424HMA1
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0138
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
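The hyperparameters above map roughly onto a `transformers.TrainingArguments` configuration. The sketch below is illustrative only: the output directory is a placeholder, and the "Adam" optimizer reported by the Trainer corresponds to its default AdamW settings.
```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above (not the exact training script).
training_args = TrainingArguments(
    output_dir="V0424HMA1",                    # placeholder output directory
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,            # effective total batch size: 8 * 16 = 128
    optim="adamw_torch",                       # betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=80,
    num_train_epochs=3,
    fp16=True,                                 # native AMP mixed precision
)
```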
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8087 | 0.09 | 10 | 0.4267 |
| 0.1997 | 0.18 | 20 | 0.1216 |
| 0.1165 | 0.27 | 30 | 0.1011 |
| 0.1054 | 0.36 | 40 | 0.0817 |
| 0.0815 | 0.45 | 50 | 0.0762 |
| 0.0841 | 0.54 | 60 | 0.0754 |
| 0.0786 | 0.63 | 70 | 0.0745 |
| 0.0784 | 0.73 | 80 | 0.0791 |
| 0.0807 | 0.82 | 90 | 0.0749 |
| 0.0805 | 0.91 | 100 | 0.0719 |
| 0.0745 | 1.0 | 110 | 0.0650 |
| 0.0631 | 1.09 | 120 | 0.0713 |
| 0.0667 | 1.18 | 130 | 0.0695 |
| 0.081 | 1.27 | 140 | 0.0714 |
| 0.0767 | 1.36 | 150 | 0.2465 |
| 0.1068 | 1.45 | 160 | 0.0718 |
| 0.075 | 1.54 | 170 | 0.0747 |
| 0.0811 | 1.63 | 180 | 0.0799 |
| 0.0687 | 1.72 | 190 | 0.0782 |
| 0.0818 | 1.81 | 200 | 0.0683 |
| 0.0593 | 1.9 | 210 | 0.0581 |
| 0.0514 | 1.99 | 220 | 0.0412 |
| 0.0267 | 2.08 | 230 | 0.0364 |
| 0.0232 | 2.18 | 240 | 0.0324 |
| 0.0166 | 2.27 | 250 | 0.0206 |
| 0.0274 | 2.36 | 260 | 0.0288 |
| 0.0182 | 2.45 | 270 | 0.0189 |
| 0.0153 | 2.54 | 280 | 0.0169 |
| 0.0112 | 2.63 | 290 | 0.0143 |
| 0.011 | 2.72 | 300 | 0.0142 |
| 0.0137 | 2.81 | 310 | 0.0140 |
| 0.0092 | 2.9 | 320 | 0.0138 |
| 0.0104 | 2.99 | 330 | 0.0138 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0424HMA1", "results": []}]}
|
Litzy619/V0424HMA1
| null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null |
2024-04-24T09:28:06+00:00
|
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** saiful2024
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
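Because the repository carries standard `transformers`/`pytorch` weights for the merged model, it can be loaded like any other causal LM. A minimal sketch, assuming a GPU with enough memory for an 8B model (the prompt text is illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "saiful2024/llama3-8b-oig-unsloth-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision to fit an 8B model on a single GPU
    device_map="auto",
)

inputs = tokenizer("Give three tips for writing clear documentation.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```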
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
|
saiful2024/llama3-8b-oig-unsloth-merged
| null |
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T09:28:26+00:00
|
text-generation
|
transformers
|
{}
|
maddi99/blm_g4_GPTQ
| null |
[
"transformers",
"bloom",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-24T09:28:34+00:00
|
|
feature-extraction
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
MaiiaCompsolutions/multiclass_id2label_finance
| null |
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T09:28:57+00:00
|
null | null |
{"license": "unknown"}
|
harperlilian21/Ron_Weasley
| null |
[
"license:unknown",
"region:us"
] | null |
2024-04-24T09:29:23+00:00
|
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": ["llama-factory"]}
|
zjj815/codeqwen-7b-schema
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T09:29:48+00:00
|
null | null |
{}
|
Heyzews/jinora-gpt2-medium-full-plus
| null |
[
"safetensors",
"region:us"
] | null |
2024-04-24T09:29:53+00:00
|
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": ["llama-factory"]}
|
zjj815/codeqwen-7b-sql
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T09:30:39+00:00
|
null | null |
# Model Card for InternVideo2
This model card provides information about the model presented in 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.
## Model Details
### Model Sources
- **Repository:** [InternVideo2](https://github.com/OpenGVLab/InternVideo/tree/main/InternVideo2)
- **Paper:** [2403.15377](https://arxiv.org/abs/2403.15377)
- **Point of Contact:** [InternVideo Group](mailto:[email protected])
## Citation
If you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community.
```bibtex
@article{wang2024internvideo2,
title={InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding},
author={Wang, Yi and Li, Kunchang and Li, Xinhao and Yu, Jiashuo and He, Yinan and Chen, Guo and Pei, Baoqi and Zheng, Rongkun and Xu, Jilan and Wang, Zun and others},
journal={arXiv preprint arXiv:2403.15377},
year={2024}
}
@article{wang2022internvideo,
title={InternVideo: General Video Foundation Models via Generative and Discriminative Learning},
author={Wang, Yi and Li, Kunchang and Li, Yizhuo and He, Yinan and Huang, Bingkun and Zhao, Zhiyu and Zhang, Hongjie and Xu, Jilan and Liu, Yi and Wang, Zun and Xing, Sen and Chen, Guo and Pan, Junting and Yu, Jiashuo and Wang, Yali and Wang, Limin and Qiao, Yu},
journal={arXiv preprint arXiv:2212.03191},
year={2022}
}
```
|
{"license": "apache-2.0", "extra_gated_prompt": "You agree to not use the model to conduct experiments that cause harm to human subjects.", "extra_gated_fields": {"Name": "text", "Company/Organization": "text", "Country": "text", "E-Mail": "text"}}
|
OpenGVLab/InternVideo2-Stage1-1B-224p-f8-k710
| null |
[
"arxiv:2403.15377",
"license:apache-2.0",
"region:us"
] | null |
2024-04-24T09:30:43+00:00
|
null | null |
{}
|
JuanLyons/MaskedLM
| null |
[
"region:us"
] | null |
2024-04-24T09:31:12+00:00
|
|
text-generation
|
transformers
|
{}
|
titanbot/mpt-7b-instruct-8k-4bit-AWQ
| null |
[
"transformers",
"mpt",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T09:31:51+00:00
|
|
null | null |
model: runwayml/stable-diffusion-v1-5
|
{}
|
trongg/cr7
| null |
[
"region:us"
] | null |
2024-04-24T09:32:18+00:00
|
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0424HMA2
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9738 | 0.09 | 10 | 0.6274 |
| 0.2393 | 0.18 | 20 | 0.1219 |
| 0.1178 | 0.27 | 30 | 0.0941 |
| 0.0994 | 0.36 | 40 | 0.0758 |
| 0.0776 | 0.45 | 50 | 0.0772 |
| 0.0858 | 0.54 | 60 | 0.0728 |
| 0.0808 | 0.63 | 70 | 0.0750 |
| 0.0838 | 0.73 | 80 | 0.0829 |
| 0.0885 | 0.82 | 90 | 0.0693 |
| 0.0925 | 0.91 | 100 | 0.0701 |
| 0.0917 | 1.0 | 110 | 0.0651 |
| 0.0645 | 1.09 | 120 | 0.0766 |
| 0.0767 | 1.18 | 130 | 0.0721 |
| 0.0695 | 1.27 | 140 | 0.0660 |
| 0.0653 | 1.36 | 150 | 0.0686 |
| 0.0633 | 1.45 | 160 | 0.0672 |
| 0.0614 | 1.54 | 170 | 0.0607 |
| 0.0643 | 1.63 | 180 | 0.0608 |
| 0.0579 | 1.72 | 190 | 0.0618 |
| 0.0658 | 1.81 | 200 | 0.0599 |
| 0.0503 | 1.9 | 210 | 0.0628 |
| 0.0514 | 1.99 | 220 | 0.0590 |
| 0.0358 | 2.08 | 230 | 0.0615 |
| 0.0306 | 2.18 | 240 | 0.0660 |
| 0.0262 | 2.27 | 250 | 0.0593 |
| 0.0249 | 2.36 | 260 | 0.0555 |
| 0.025 | 2.45 | 270 | 0.0535 |
| 0.0233 | 2.54 | 280 | 0.0512 |
| 0.0196 | 2.63 | 290 | 0.0508 |
| 0.0204 | 2.72 | 300 | 0.0503 |
| 0.0226 | 2.81 | 310 | 0.0499 |
| 0.0199 | 2.9 | 320 | 0.0499 |
| 0.0189 | 2.99 | 330 | 0.0500 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0424HMA2", "results": []}]}
|
Litzy619/V0424HMA2
| null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null |
2024-04-24T09:32:18+00:00
|
null | null |
{}
|
mateotfuentes/my_awesome_opus_books_model
| null |
[
"region:us"
] | null |
2024-04-24T09:32:33+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_5K
This model is a fine-tuned version of [google-bert/bert-large-cased](https://huggingface.co/google-bert/bert-large-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google-bert/bert-large-cased", "model-index": [{"name": "results_5K", "results": []}]}
|
Elkelouizajo/bert_mnli_5K
| null |
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T09:32:54+00:00
|
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0424HMA3
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8524 | 0.09 | 10 | 0.4537 |
| 0.1997 | 0.18 | 20 | 0.1136 |
| 0.113 | 0.27 | 30 | 0.0908 |
| 0.0995 | 0.36 | 40 | 0.0755 |
| 0.0777 | 0.45 | 50 | 0.0740 |
| 0.0815 | 0.54 | 60 | 0.0752 |
| 0.0785 | 0.63 | 70 | 0.0753 |
| 0.0849 | 0.73 | 80 | 0.0838 |
| 0.0878 | 0.82 | 90 | 0.0910 |
| 0.0853 | 0.91 | 100 | 0.0737 |
| 0.0807 | 1.0 | 110 | 0.0721 |
| 0.067 | 1.09 | 120 | 0.0745 |
| 0.0718 | 1.18 | 130 | 0.0849 |
| 0.0677 | 1.27 | 140 | 0.0658 |
| 0.0693 | 1.36 | 150 | 0.0678 |
| 0.0711 | 1.45 | 160 | 0.0712 |
| 0.068 | 1.54 | 170 | 0.0707 |
| 0.0687 | 1.63 | 180 | 0.0709 |
| 0.0597 | 1.72 | 190 | 0.0673 |
| 0.065 | 1.81 | 200 | 0.0702 |
| 0.0576 | 1.9 | 210 | 0.0699 |
| 0.0535 | 1.99 | 220 | 0.0610 |
| 0.0382 | 2.08 | 230 | 0.0712 |
| 0.0367 | 2.18 | 240 | 0.0693 |
| 0.0307 | 2.27 | 250 | 0.0662 |
| 0.0311 | 2.36 | 260 | 0.0800 |
| 0.0422 | 2.45 | 270 | 0.0673 |
| 0.0352 | 2.54 | 280 | 0.0661 |
| 0.0305 | 2.63 | 290 | 0.0681 |
| 0.0352 | 2.72 | 300 | 0.0671 |
| 0.0337 | 2.81 | 310 | 0.0672 |
| 0.0333 | 2.9 | 320 | 0.0669 |
| 0.0354 | 2.99 | 330 | 0.0669 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0424HMA3", "results": []}]}
|
Litzy619/V0424HMA3
| null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null |
2024-04-24T09:33:59+00:00
|
text-generation
|
transformers
|
# Llama-3-Mistral-v0.2-Instruct-slerp
Llama-3-Mistral-v0.2-Instruct-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range:
          - 0
          - 32
      - model: mistralai/Mistral-7B-Instruct-v0.2
        layer_range:
          - 0
          - 32
merge_method: slerp
base_model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
  t:
    - filter: self_attn
      value:
        - 0
        - 0.5
        - 0.3
        - 0.7
        - 1
    - filter: mlp
      value:
        - 1
        - 0.5
        - 0.7
        - 0.3
        - 0
    - value: 0.5
dtype: bfloat16
```
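For intuition, `slerp` (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line, and the `t` values above set the blend per layer group (self-attention vs. MLP, with 0.5 elsewhere). Below is a minimal NumPy sketch of the operation, as an illustration of the idea rather than mergekit's implementation:
```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat, b_flat = a.ravel(), b.ravel()
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))  # angle between the tensors
    if omega < eps:                          # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b

# t = 0 keeps the base model's tensor, t = 1 takes the other model's tensor.
merged = slerp(np.random.randn(4, 4), np.random.randn(4, 4), t=0.5)
```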
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "varox34/Llama-3-Mistral-v0.2-Instruct-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "mit", "tags": ["merge", "mergekit", "lazymergekit", "meta-llama/Meta-Llama-3-8B-Instruct", "mistralai/Mistral-7B-Instruct-v0.2"], "base_model": ["meta-llama/Meta-Llama-3-8B-Instruct", "mistralai/Mistral-7B-Instruct-v0.2"]}
|
varox34/Llama-3-Mistral-v0.2-Instruct-slerp
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"meta-llama/Meta-Llama-3-8B-Instruct",
"mistralai/Mistral-7B-Instruct-v0.2",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T09:34:42+00:00
|
null | null |
{}
|
Anu542001/whisper-small-ananya
| null |
[
"region:us"
] | null |
2024-04-24T09:34:44+00:00
|
|
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0424HMA4
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7259 | 0.09 | 10 | 0.1501 |
| 0.1625 | 0.18 | 20 | 0.1179 |
| 0.1147 | 0.27 | 30 | 0.0946 |
| 0.1 | 0.36 | 40 | 0.0843 |
| 0.0845 | 0.45 | 50 | 0.0766 |
| 0.0932 | 0.54 | 60 | 0.0825 |
| 0.0859 | 0.63 | 70 | 0.0738 |
| 0.0804 | 0.73 | 80 | 0.0825 |
| 0.0886 | 0.82 | 90 | 0.0815 |
| 0.305 | 0.91 | 100 | 0.1126 |
| 0.1279 | 1.0 | 110 | 0.1019 |
| 0.6008 | 1.09 | 120 | 0.2538 |
| 1.0009 | 1.18 | 130 | 0.2666 |
| 0.2339 | 1.27 | 140 | 0.1617 |
| 0.2166 | 1.36 | 150 | 0.1888 |
| 0.1717 | 1.45 | 160 | 0.1619 |
| 0.166 | 1.54 | 170 | 0.1569 |
| 0.1608 | 1.63 | 180 | 0.1716 |
| 0.155 | 1.72 | 190 | 0.1543 |
| 0.1524 | 1.81 | 200 | 0.1611 |
| 0.1564 | 1.9 | 210 | 0.1490 |
| 0.1544 | 1.99 | 220 | 0.1514 |
| 0.1581 | 2.08 | 230 | 0.1557 |
| 0.1521 | 2.18 | 240 | 0.1514 |
| 0.1517 | 2.27 | 250 | 0.1486 |
| 0.1534 | 2.36 | 260 | 0.1484 |
| 0.1499 | 2.45 | 270 | 0.1484 |
| 0.1484 | 2.54 | 280 | 0.1466 |
| 0.1475 | 2.63 | 290 | 0.1485 |
| 0.15 | 2.72 | 300 | 0.1467 |
| 0.1495 | 2.81 | 310 | 0.1470 |
| 0.1488 | 2.9 | 320 | 0.1471 |
| 0.1482 | 2.99 | 330 | 0.1475 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0424HMA4", "results": []}]}
|
Litzy619/V0424HMA4
| null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null |
2024-04-24T09:34:59+00:00
|
text-generation
|
transformers
|
# Medical-Llama3-8B-GPTQ
[ruslanmv.com](https://ruslanmv.com/)
This is a fine-tuned version of the Llama3 8B model, specifically designed to answer medical questions.
The model was trained on the AI Medical Chatbot dataset, which can be found at [ruslanmv/ai-medical-chatbot](https://huggingface.co/datasets/ruslanmv/ai-medical-chatbot).
This fine-tuned model leverages the GPTQ technique for efficient inference with 4-bit quantization.
GPTQ compresses deep learning model weights through a 4-bit quantization process designed for efficient GPU inference.
The approach reduces model size by converting weights to a 4-bit representation while controlling the quantization error; during inference,
the weights are dynamically dequantized back to float16, balancing reduced memory usage with computational efficiency.
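As a rough illustration of the idea (a toy round-to-nearest scheme, not GPTQ's error-compensating algorithm), a 4-bit quantizer maps each group of weights to 16 integer levels with a per-group scale, and dequantization rebuilds an approximate float16 copy of the weights at inference time:
```python
import numpy as np

def quantize_4bit(weights: np.ndarray):
    """Toy symmetric 4-bit quantization of one weight group (illustration only)."""
    scale = np.abs(weights).max() / 7.0                      # int4 values roughly span [-8, 7]
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q: np.ndarray, scale: float) -> np.ndarray:
    """Restore an approximate float16 copy of the weights for inference."""
    return q.astype(np.float16) * np.float16(scale)

w = np.random.randn(128).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s)
print("max abs error:", np.abs(w - w_hat.astype(np.float32)).max())  # bounded by the scale
```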
**Model:** [ruslanmv/Medical-Llama3-8B-GPTQ](https://huggingface.co/ruslanmv/Medical-Llama3-8B-GPTQ)
- **Developed by:** ruslanmv
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Meta-Llama-3-8B
## Installation
**Prerequisites:**
- A system with CUDA support is highly recommended for optimal performance.
- Python 3.10 or later
**Installation Steps:**
1. **Install required Python libraries:**
```bash
pip install transformers==4.40.0 auto-gptq
```
## Usage
Here's an example of how to use the Medical-Llama3-8B-GPTQ model to generate an answer to a medical question:
```python
from auto_gptq import AutoGPTQForCausalLM  # requires the auto-gptq package
from transformers import AutoTokenizer
import torch

device = "cuda:0" if torch.cuda.is_available() else "cpu"
repo_id = "ruslanmv/Medical-Llama3-8B-GPTQ"

# Download the quantized model from the Hugging Face Hub and load it onto the first GPU
model = AutoGPTQForCausalLM.from_quantized(repo_id,
                                           device=device,
                                           use_safetensors=True,
                                           use_triton=False)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

def create_prompt(user_query):
    B_INST, E_INST = "<s>[INST]", "[/INST]"
    B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
    DEFAULT_SYSTEM_PROMPT = """\
You are an AI Medical Chatbot Assistant, I aim to provide comprehensive and informative responses to your inquiries. However, please note that while I strive for accuracy, my responses should not replace professional medical advice and short answers.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."""
    SYSTEM_PROMPT = B_SYS + DEFAULT_SYSTEM_PROMPT + E_SYS
    instruction = f"User asks: {user_query}\n"
    prompt = B_INST + SYSTEM_PROMPT + instruction + E_INST
    return prompt.strip()

def generate_text(model, tokenizer, user_query,
                  max_length=200,
                  temperature=0.7,
                  num_return_sequences=1):
    prompt = create_prompt(user_query)
    # Tokenize the prompt and move it to the same device as the model
    input_ids = tokenizer.encode(prompt, return_tensors="pt").to(device)
    # Generate text
    output = model.generate(
        input_ids=input_ids,
        max_length=max_length,
        temperature=temperature,
        num_return_sequences=num_return_sequences,
        pad_token_id=tokenizer.eos_token_id,  # use the end-of-sequence token for padding
        do_sample=True
    )
    # Decode the generated output
    generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
    # Split the generated text on the prompt and keep only the portion after it
    generated_text = generated_text.split(prompt)[-1].strip()
    return generated_text
```
## Inference Example
This section showcases how to use the model for inference.
**User Query:**
```
user_query = "I'm a 35-year-old male experiencing symptoms like fatigue, increased sensitivity to cold, and dry, itchy skin. Could these be indicative of hypothyroidism?"
```
**Answer:**
```
generated_text = generate_text(model, tokenizer, user_query)
print(generated_text)
```
You will get
```
I understand your concern. It could be attributed to hypothyroidism. You may also have perifollicular inflammation. I suggest you to get your thyroid profile done to rule out hypothyroidism. I would also suggest you to use a mild moisturizing cream, with sunscreen, to
```
## License
This model is licensed under the Apache License 2.0. You can find the full license in the LICENSE file.
|
{"language": "en", "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "ruslanmv", "llama", "trl"], "datasets": ["ruslanmv/ai-medical-chatbot"], "base_model": "meta-llama/Meta-Llama-3-8B"}
|
ruslanmv/Medical-Llama3-8B-GPTQ
| null |
[
"transformers",
"llama",
"text-generation",
"text-generation-inference",
"ruslanmv",
"trl",
"en",
"dataset:ruslanmv/ai-medical-chatbot",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null |
2024-04-24T09:35:09+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Classifier_30k
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1296
- Accuracy: 0.9876
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 0.3588 | 0.9994 | 831 | 0.3084 | 0.9091 |
| 0.1252 | 2.0 | 1663 | 0.2260 | 0.9453 |
| 0.1123 | 2.9994 | 2494 | 0.1241 | 0.9604 |
| 0.0896 | 4.0 | 3326 | 0.1372 | 0.9655 |
| 0.0749 | 4.9994 | 4157 | 0.1541 | 0.9708 |
| 0.0743 | 6.0 | 4989 | 0.1127 | 0.9715 |
| 0.0596 | 6.9994 | 5820 | 0.1782 | 0.9672 |
| 0.0494 | 8.0 | 6652 | 0.1352 | 0.9749 |
| 0.0443 | 8.9994 | 7483 | 0.1232 | 0.9681 |
| 0.0405 | 10.0 | 8315 | 0.0756 | 0.9838 |
| 0.0383 | 10.9994 | 9146 | 0.2025 | 0.9600 |
| 0.0361 | 12.0 | 9978 | 0.1130 | 0.9796 |
| 0.0288 | 12.9994 | 10809 | 0.0906 | 0.9855 |
| 0.0249 | 14.0 | 11641 | 0.1122 | 0.9827 |
| 0.0222 | 14.9994 | 12472 | 0.0713 | 0.9862 |
| 0.0239 | 16.0 | 13304 | 0.0552 | 0.9876 |
| 0.0234 | 16.9994 | 14135 | 0.0728 | 0.9864 |
| 0.0258 | 18.0 | 14967 | 0.0558 | 0.9891 |
| 0.0208 | 18.9994 | 15798 | 0.0715 | 0.9879 |
| 0.0199 | 20.0 | 16630 | 0.0753 | 0.9885 |
| 0.0143 | 20.9994 | 17461 | 0.0812 | 0.9872 |
| 0.0255 | 22.0 | 18293 | 0.1661 | 0.9744 |
| 0.0156 | 22.9994 | 19124 | 0.0751 | 0.9883 |
| 0.013 | 24.0 | 19956 | 0.0718 | 0.9862 |
| 0.0126 | 24.9994 | 20787 | 0.0829 | 0.9853 |
| 0.0123 | 26.0 | 21619 | 0.0848 | 0.9857 |
| 0.0109 | 26.9994 | 22450 | 0.0913 | 0.9864 |
| 0.0095 | 28.0 | 23282 | 0.1607 | 0.9774 |
| 0.0096 | 28.9994 | 24113 | 0.0958 | 0.9853 |
| 0.0074 | 30.0 | 24945 | 0.1264 | 0.9857 |
| 0.0091 | 30.9994 | 25776 | 0.1030 | 0.9881 |
| 0.0096 | 32.0 | 26608 | 0.0954 | 0.9879 |
| 0.0074 | 32.9994 | 27439 | 0.1103 | 0.9885 |
| 0.0067 | 34.0 | 28271 | 0.1803 | 0.9791 |
| 0.0044 | 34.9994 | 29102 | 0.1597 | 0.9817 |
| 0.0045 | 36.0 | 29934 | 0.0878 | 0.9894 |
| 0.0034 | 36.9994 | 30765 | 0.1680 | 0.9806 |
| 0.0066 | 38.0 | 31597 | 0.1114 | 0.9870 |
| 0.0041 | 38.9994 | 32428 | 0.0910 | 0.9896 |
| 0.0043 | 40.0 | 33260 | 0.1435 | 0.9840 |
| 0.0037 | 40.9994 | 34091 | 0.1233 | 0.9881 |
| 0.0046 | 42.0 | 34923 | 0.1347 | 0.9864 |
| 0.0029 | 42.9994 | 35754 | 0.1134 | 0.9883 |
| 0.0017 | 44.0 | 36586 | 0.1125 | 0.9879 |
| 0.0025 | 44.9994 | 37417 | 0.1400 | 0.9859 |
| 0.0023 | 46.0 | 38249 | 0.1228 | 0.9879 |
| 0.0017 | 46.9994 | 39080 | 0.1445 | 0.9862 |
| 0.0011 | 48.0 | 39912 | 0.1375 | 0.9876 |
| 0.0013 | 48.9994 | 40743 | 0.1323 | 0.9876 |
| 0.0021 | 49.9699 | 41550 | 0.1296 | 0.9876 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-large", "model-index": [{"name": "Classifier_30k", "results": []}]}
|
Tensorride/Classifier_30k
| null |
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T09:35:58+00:00
|
text-generation
|
transformers
|
# EMO-phi-128k
EMO-phi-128k is an emotional intelligence conversational AI model fine-tuned from Microsoft's Phi-3-mini-128k-instruct model. It is designed to engage in open-ended dialogue while exhibiting emotional understanding and emotional intelligence capabilities.
## Model Details
- **Developer**: OEvortex
- **Model Type**: Transformer-based language model
- **Language**: English
- **License**: MIT
- **Base Model**: microsoft/Phi-3-mini-128k-instruct
## Model Description
EMO-phi-128k is a conversational AI model with a 128k-token context window that has been fine-tuned to incorporate emotional intelligence and emotional understanding capabilities. It aims to engage in emotionally aware and contextual dialogue by recognizing and responding appropriately to the emotional tones and sentiments expressed by the user.
While inheriting the strong language understanding and generation capabilities of its base model, EMO-phi-128k has been specifically optimized for emotional intelligence tasks through additional fine-tuning on emotional dialogue datasets.
## Intended Uses
- Emotional Support / Conversational Companion
- Customer Service Chatbots (with emotional intelligence)
- Creative Writing Assistance (with emotional awareness)
- Psychological/Therapeutic Applications
## Limitations and Risks
As an AI system, EMO-phi-128k may exhibit biases present in its training data. Its true emotional intelligence capabilities are not fully known or verified. The model should be used with caution, especially in sensitive or high-stakes applications involving mental health, therapy, or counseling. Proper human oversight is recommended.
Additionally, like all language models, EMO-phi-128k is susceptible to generating harmful, biased, or explicit content if prompted in an unsafe manner. Safety considerations should be taken into account when deploying or interacting with the model.
## How to Use
You can load and use the EMO-phi-128k model with the Transformers library in Python:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"OEvortex/EMO-phi-128k",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
messages = [
{"role": "system", "content": "You are a helpful Emotional intelligence named as EMO-phi, remember to always answer users question in EMO style."},
{"role": "user", "content": "My best friend recently lost their parent to cancer after a long battle. They are understandably devastated and struggling with grief."},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.6,
"do_sample": True,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
|
{"license": "mit", "library_name": "transformers", "tags": ["EMO"], "pipeline_tag": "text-generation", "base_model": "microsoft/Phi-3-mini-128k-instruct"}
|
OEvortex/EMO-phi-128k
| null |
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"EMO",
"conversational",
"custom_code",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T09:36:20+00:00
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
tomaszki/llama-9
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T09:37:34+00:00
|
null | null |
# Model Card for InternVideo2
This model card provides information about the model presented in 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.
## Model Details
### Model Sources
- **Repository:** [InternVideo2](https://github.com/OpenGVLab/InternVideo/tree/main/InternVideo2)
- **Paper:** [2403.15377](https://arxiv.org/abs/2403.15377)
- **Point of Contact:** [InternVideo Group](mailto:[email protected])
## Citation
If you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community.
```bibtex
@article{wang2024internvideo2,
title={InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding},
author={Wang, Yi and Li, Kunchang and Li, Xinhao and Yu, Jiashuo and He, Yinan and Chen, Guo and Pei, Baoqi and Zheng, Rongkun and Xu, Jilan and Wang, Zun and others},
journal={arXiv preprint arXiv:2403.15377},
year={2024}
}
@article{wang2022internvideo,
title={InternVideo: General Video Foundation Models via Generative and Discriminative Learning},
author={Wang, Yi and Li, Kunchang and Li, Yizhuo and He, Yinan and Huang, Bingkun and Zhao, Zhiyu and Zhang, Hongjie and Xu, Jilan and Liu, Yi and Wang, Zun and Xing, Sen and Chen, Guo and Pan, Junting and Yu, Jiashuo and Wang, Yali and Wang, Limin and Qiao, Yu},
journal={arXiv preprint arXiv:2212.03191},
year={2022}
}
```
|
{"license": "apache-2.0", "extra_gated_prompt": "You agree to not use the model to conduct experiments that cause harm to human subjects.", "extra_gated_fields": {"Name": "text", "Company/Organization": "text", "Country": "text", "E-Mail": "text"}}
|
OpenGVLab/InternVideo2-Stage1-1B-224p-K400
| null |
[
"arxiv:2403.15377",
"license:apache-2.0",
"region:us"
] | null |
2024-04-24T09:37:37+00:00
|
null | null |
{}
|
Eunhyecos/Wwwwwwwww
| null |
[
"region:us"
] | null |
2024-04-24T09:39:27+00:00
|
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
notbdq/test72173
| null |
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T09:40:12+00:00
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
nmdr/Gemma-1.1-2b-it-Platypus-2k-The-Iron-Giant10
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T09:40:38+00:00
|
null | null |
# Model Card for InternVideo2
This model card provides information about 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.
## Model Details
### Model Sources
- **Repository:** [InternVideo2](https://github.com/OpenGVLab/InternVideo/tree/main/InternVideo2)
- **Paper:** [2403.15377](https://arxiv.org/abs/2403.15377)
- **Point of Contact:** [InternVideo Group](mailto:[email protected])
## Citation
If you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community.
```
@article{wang2024internvideo2,
title={InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding},
author={Wang, Yi and Li, Kunchang and Li, Xinhao and Yu, Jiashuo and He, Yinan and Chen, Guo and Pei, Baoqi and Zheng, Rongkun and Xu, Jilan and Wang, Zun and others},
journal={arXiv preprint arXiv:2403.15377},
year={2024}
}
@article{wang2022internvideo,
title={InternVideo: General Video Foundation Models via Generative and Discriminative Learning},
author={Wang, Yi and Li, Kunchang and Li, Yizhuo and He, Yinan and Huang, Bingkun and Zhao, Zhiyu and Zhang, Hongjie and Xu, Jilan and Liu, Yi and Wang, Zun and Xing, Sen and Chen, Guo and Pan, Junting and Yu, Jiashuo and Wang, Yali and Wang, Limin and Qiao, Yu},
journal={arXiv preprint arXiv:2212.03191},
year={2022}
}
```
|
{"license": "apache-2.0", "extra_gated_prompt": "You agree to not use the model to conduct experiments that cause harm to human subjects.", "extra_gated_fields": {"Name": "text", "Company/Organization": "text", "Country": "text", "E-Mail": "text"}}
|
OpenGVLab/InternVideo2-Stage1-1B-224p-K600
| null |
[
"arxiv:2403.15377",
"license:apache-2.0",
"region:us"
] | null |
2024-04-24T09:41:29+00:00
|
null | null |
{}
|
Eunhyecos/Taooo
| null |
[
"region:us"
] | null |
2024-04-24T09:41:48+00:00
|
|
null | null |
{}
|
Snip675/GAWR
| null |
[
"region:us"
] | null |
2024-04-24T09:41:55+00:00
|
|
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0424HMA5
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
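For orientation, a minimal sketch of how this configuration could be expressed with the 🤗 `TrainingArguments` API is shown below. It is reconstructed only from the values listed above; the actual training script, dataset, and any adapter/PEFT settings are not documented in this card and are assumptions.
```python
from transformers import TrainingArguments

# Sketch of a configuration matching the hyperparameters listed above.
# Model loading, dataset preparation, and any adapter settings are not shown.
training_args = TrainingArguments(
    output_dir="V0424HMA5",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,        # effective train batch size of 128
    seed=42,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=80,
    num_train_epochs=3,
    fp16=True,                             # "Native AMP" mixed precision
    optim="adamw_torch",                   # Adam with betas=(0.9, 0.999), eps=1e-08
)
```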
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6866 | 0.09 | 10 | 0.1565 |
| 0.1493 | 0.18 | 20 | 0.1091 |
| 0.109 | 0.27 | 30 | 0.1005 |
| 0.1556 | 0.36 | 40 | 0.1006 |
| 0.0904 | 0.45 | 50 | 0.0750 |
| 0.0874 | 0.54 | 60 | 0.0702 |
| 0.0796 | 0.63 | 70 | 0.0750 |
| 0.1471 | 0.73 | 80 | 0.0761 |
| 0.0894 | 0.82 | 90 | 3.9310 |
| 1.6248 | 0.91 | 100 | 0.2083 |
| 0.3602 | 1.0 | 110 | 0.2208 |
| 0.7484 | 1.09 | 120 | 0.5896 |
| 0.2412 | 1.18 | 130 | 0.1540 |
| 0.1665 | 1.27 | 140 | 0.1771 |
| 0.1688 | 1.36 | 150 | 0.1605 |
| 0.1604 | 1.45 | 160 | 0.1586 |
| 0.1633 | 1.54 | 170 | 0.1550 |
| 0.1616 | 1.63 | 180 | 0.1554 |
| 0.1543 | 1.72 | 190 | 0.1481 |
| 0.149 | 1.81 | 200 | 0.1602 |
| 0.1544 | 1.9 | 210 | 0.1482 |
| 0.1535 | 1.99 | 220 | 0.1489 |
| 0.1529 | 2.08 | 230 | 0.1527 |
| 0.1473 | 2.18 | 240 | 0.1485 |
| 0.1469 | 2.27 | 250 | 0.1484 |
| 0.1489 | 2.36 | 260 | 0.1483 |
| 0.1469 | 2.45 | 270 | 0.1478 |
| 0.1459 | 2.54 | 280 | 0.1466 |
| 0.1461 | 2.63 | 290 | 0.1473 |
| 0.1483 | 2.72 | 300 | 0.1467 |
| 0.1471 | 2.81 | 310 | 0.1468 |
| 0.1475 | 2.9 | 320 | 0.1466 |
| 0.1474 | 2.99 | 330 | 0.1467 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0424HMA5", "results": []}]}
|
Litzy619/V0424HMA5
| null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null |
2024-04-24T09:42:22+00:00
|
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0424HMA6
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6303 | 0.09 | 10 | 0.1548 |
| 0.1532 | 0.18 | 20 | 0.1184 |
| 0.1168 | 0.27 | 30 | 0.0949 |
| 0.2198 | 0.36 | 40 | 0.0971 |
| 0.0993 | 0.45 | 50 | 0.0863 |
| 0.0947 | 0.54 | 60 | 0.0737 |
| 0.0782 | 0.63 | 70 | 0.0718 |
| 0.086 | 0.73 | 80 | 0.0760 |
| 0.0877 | 0.82 | 90 | 0.0852 |
| 0.1259 | 0.91 | 100 | 0.0977 |
| 0.4521 | 1.0 | 110 | 0.4505 |
| 2.192 | 1.09 | 120 | 0.3481 |
| 0.98 | 1.18 | 130 | 0.3816 |
| 0.2241 | 1.27 | 140 | 0.1646 |
| 0.2434 | 1.36 | 150 | 0.3400 |
| 0.5348 | 1.45 | 160 | 0.1840 |
| 0.1717 | 1.54 | 170 | 0.1585 |
| 0.1724 | 1.63 | 180 | 0.1613 |
| 0.1633 | 1.72 | 190 | 0.1531 |
| 0.1621 | 1.81 | 200 | 0.1668 |
| 0.1661 | 1.9 | 210 | 0.1782 |
| 0.1755 | 1.99 | 220 | 0.1592 |
| 0.1634 | 2.08 | 230 | 0.1635 |
| 0.1579 | 2.18 | 240 | 0.1507 |
| 0.1506 | 2.27 | 250 | 0.1524 |
| 0.1537 | 2.36 | 260 | 0.1487 |
| 0.1479 | 2.45 | 270 | 0.1506 |
| 0.1492 | 2.54 | 280 | 0.1469 |
| 0.1482 | 2.63 | 290 | 0.1489 |
| 0.1511 | 2.72 | 300 | 0.1479 |
| 0.1486 | 2.81 | 310 | 0.1476 |
| 0.149 | 2.9 | 320 | 0.1478 |
| 0.1495 | 2.99 | 330 | 0.1480 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0424HMA6", "results": []}]}
|
Litzy619/V0424HMA6
| null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null |
2024-04-24T09:42:25+00:00
|
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0424HMA7
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9937 | 0.09 | 10 | 0.7945 |
| 0.3146 | 0.18 | 20 | 0.1240 |
| 0.1218 | 0.27 | 30 | 0.0958 |
| 0.1043 | 0.36 | 40 | 0.0855 |
| 0.0824 | 0.45 | 50 | 0.0869 |
| 0.0895 | 0.54 | 60 | 0.0761 |
| 0.0818 | 0.63 | 70 | 0.0730 |
| 0.0792 | 0.73 | 80 | 0.0759 |
| 0.0853 | 0.82 | 90 | 0.0727 |
| 0.0772 | 0.91 | 100 | 0.0718 |
| 0.0768 | 1.0 | 110 | 0.0681 |
| 0.0677 | 1.09 | 120 | 0.0725 |
| 0.073 | 1.18 | 130 | 0.0759 |
| 0.0676 | 1.27 | 140 | 0.0777 |
| 0.0703 | 1.36 | 150 | 0.0675 |
| 0.0804 | 1.45 | 160 | 0.0659 |
| 0.0639 | 1.54 | 170 | 0.0624 |
| 0.0681 | 1.63 | 180 | 0.0618 |
| 0.0613 | 1.72 | 190 | 0.0612 |
| 0.0672 | 1.81 | 200 | 0.0623 |
| 0.0572 | 1.9 | 210 | 0.0611 |
| 0.0526 | 1.99 | 220 | 0.0549 |
| 0.0384 | 2.08 | 230 | 0.0607 |
| 0.0378 | 2.18 | 240 | 0.0628 |
| 0.035 | 2.27 | 250 | 0.0581 |
| 0.0341 | 2.36 | 260 | 0.0544 |
| 0.0361 | 2.45 | 270 | 0.0526 |
| 0.0309 | 2.54 | 280 | 0.0504 |
| 0.0256 | 2.63 | 290 | 0.0501 |
| 0.0266 | 2.72 | 300 | 0.0498 |
| 0.0336 | 2.81 | 310 | 0.0482 |
| 0.0284 | 2.9 | 320 | 0.0478 |
| 0.0256 | 2.99 | 330 | 0.0477 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0424HMA7", "results": []}]}
|
Litzy619/V0424HMA7
| null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null |
2024-04-24T09:43:23+00:00
|
text-generation
|
transformers
|
<div align="center">
<h1>Activation Beacon for Mistral</h1>
[<a href="https://arxiv.org/abs/2401.03462">Paper</a>] [<a href="https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon">Github</a>]
</div>
We apply [activation beacon](https://arxiv.org/abs/2401.03462) to [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). It is notable for the following features:
- **Effective**: strong performance on long-context tasks.
- **Efficient**: significantly lower memory usage & inference latency compared with full-attention models (you can easily run 128K context on a single A100 device).
- **Compatible**: a plug-in module to establish long-context capabilities for Mistral (we did not modify any parameters of the original Mistral model).
- **Low-Cost Training**: trained on 2B tokens, where all training samples are **shorter than 20K tokens**.
Compared with [activation-beacon-llama2-7b-chat](https://huggingface.co/namespace-Pt/activation-beacon-llama2-7b-chat), there are three major differences:
- **Training Data**: we increase the data for pretraining (2B tokens with a 16384 sequence length on [slimpajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B)) and supervised finetuning (open-sourced long-context data as well as thousands of synthetic long-context QA pairs generated with GPT-4).
- **Sliding Window**: the window size is increased to 2048.
- **Condensing Ratio**: we train with condensing ratios of `[2,4,8,16,32]` during pretraining and `[2,4,8]` during finetuning. During both stages, we mix the condensing ratios with a step-random strategy (see the [paper](https://arxiv.org/abs/2401.03462) for details).
# Evaluation
You can easily reproduce the following results by following the instructions [here](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon/new).
## [Needle in a Haystack](https://github.com/gkamradt/LLMTest_NeedleInAHaystack)
We evaluate the model on the Needle-In-A-Haystack task using the official setting.
<img src="data/needle.png"></img>
## [LongBench](https://arxiv.org/abs/2308.14508)
We evaluate the model on [LongBench](https://arxiv.org/abs/2308.14508) using a 32K context length.
|Model|Single Doc QA|Multi Doc QA|Summarization|
|:-:|:-:|:-:|:-:|
|[Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)|32.70|25.87|27.42|
|[Yarn-Mistral-128K](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k)|33.71|36.08|23.47|
|Activation-Beacon-Mistral-7B|39.14|43.27|29.52|
## [InfiniteBench](https://arxiv.org/pdf/2402.13718.pdf)
We evaluate the model on [InfiniteBench](https://arxiv.org/pdf/2402.13718.pdf) using a 128K context length. The results of Yarn-Mistral-128K are copied from the [paper](https://arxiv.org/pdf/2402.13718.pdf).
|Model|LongBookQA Eng|LongBookSum Eng|
|:-:|:-:|:-:|
|[Yarn-Mistral-128K](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k)|9.55|9.09|
|Activation-Beacon-Mistral-7B|26.81|12.49|
## [Topic Retrieval](https://lmsys.org/blog/2023-06-29-longchat/)
We evaluate the model on the [Topic Retrieval](https://lmsys.org/blog/2023-06-29-longchat/) task with `[5,10,15,20,25,30,40,50,60,70]` topics.
<img src="data/topic.png"></img>
## [PG19 Perplexity](https://arxiv.org/abs/2309.12307)
We evaluate the sliding-window perplexity on the PG19 test set with a window size of 100K and a stride of 32K. We also report the latency and the GPU memory usage. For full-attention models, we enable [flash-attention-2](https://github.com/Dao-AILab/flash-attention) and [tensor parallel](https://github.com/BlackSamorez/tensor_parallel). The evaluation is run on an 8xA800 machine.
|Model|Perplexity|Latency (s)|Memory (GB)|
|:-:|:-:|:-:|:-:|
|[Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)|8.83|14.02|525.6 (cannot run on a single GPU)|
|[Yarn-Mistral-128K](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k)|7.66|14.56|525.6 (cannot run on a single GPU)|
|Activation-Beacon-Mistral-7B|8.16|3.06|27.4|
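For reference, sliding-window (strided) perplexity of this kind is usually computed with the standard strided-evaluation recipe sketched below. This is a generic illustration assuming a plain 🤗 causal LM interface, not the exact script used to produce the numbers in the table; the window and stride default to the 100K/32K setting described above.
```python
import torch

def strided_perplexity(model, tokenizer, text, window=100_000, stride=32_000, device="cuda"):
    # Each window scores only the tokens that were not scored by the previous window.
    input_ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
    seq_len = input_ids.size(1)
    nlls, prev_end = [], 0
    for begin in range(0, seq_len, stride):
        end = min(begin + window, seq_len)
        target_len = end - prev_end            # tokens newly scored in this window
        ids = input_ids[:, begin:end]
        labels = ids.clone()
        labels[:, :-target_len] = -100         # ignore tokens already scored
        with torch.no_grad():
            loss = model(ids, labels=labels).loss
        nlls.append(loss * target_len)
        prev_end = end
        if end == seq_len:
            break
    return torch.exp(torch.stack(nlls).sum() / prev_end)
```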
## [Passkey Retrieval](https://arxiv.org/abs/2309.12307)
We evaluate the model on the [Passkey Retrieval](https://arxiv.org/abs/2309.12307) task using the official setting.
<img src="data/passkey.png"></img>
# Environment
```bash
torch>=2.1.1
transformers==4.39.3
```
# Usage
```python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "namespace-Pt/activation-beacon-mistral-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, torch_dtype=torch.bfloat16)
model = model.cuda().eval()

with torch.no_grad():
    # short context
    messages = [{"role": "user", "content": "Tell me about yourself."}]
    inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True).to("cuda")
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(f"Input Length: {inputs['input_ids'].shape[1]}")
    print(f"Output: {tokenizer.decode(outputs[0], skip_special_tokens=True)}")

    # reset memory before new generation task
    model.memory.reset()

    # long context
    with open("data/infbench.json", encoding="utf-8") as f:
        example = json.load(f)
    messages = [{"role": "user", "content": example["context"]}]
    inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True).to("cuda")
    outputs = model.generate(**inputs, do_sample=False, top_p=1, temperature=1, max_new_tokens=20)[:, inputs["input_ids"].shape[1]:]
    print("*"*20)
    print(f"Input Length: {inputs['input_ids'].shape[1]}")
    print(f"Answers: {example['answer']}")
    print(f"Prediction: {tokenizer.decode(outputs[0], skip_special_tokens=True)}")
```
**NOTE**: It is expected to see warnings like `This is a friendly reminder - the current text generation call will exceed the model's predefined maximum length (32768). Depending on the model, you may observe exceptions, performance degradation, or nothing at all.` They can be safely ignored.
|
{"license": "mit", "pipeline_tag": "text-generation"}
|
namespace-Pt/activation-beacon-mistral-7b
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"custom_code",
"arxiv:2401.03462",
"arxiv:2308.14508",
"arxiv:2402.13718",
"arxiv:2309.12307",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T09:43:33+00:00
|
null | null |
{}
|
Kishmat/test
| null |
[
"region:us"
] | null |
2024-04-24T09:44:04+00:00
|
|
text-generation
|
transformers
|
{}
|
titanbot/opt-30b-base-4bit-AWQ
| null |
[
"transformers",
"opt",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T09:45:00+00:00
|
|
automatic-speech-recognition
|
transformers
|
# Model Description
Provide your detailed description here...
|
{"language": "is", "tags": ["automatic-speech-recognition", "wav2vec2", "WER"], "model-index": [{"name": "gudjonk93/wav2vec2-large-xlsr-53-female-over-49", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Samr\u00f3mur Millj\u00f3n, split=female_gt_49_yrs (Validation)", "type": "language-and-voice-lab/samromur_milljon", "split": "validation", "args": "male_18to49_yrs"}, "metrics": [{"type": "wer", "value": 12.6, "name": "WER"}]}]}]}
|
gudjonk93/wav2vec2-large-xlsr-53-female-over-49
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"WER",
"is",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T09:45:41+00:00
|
null | null |
{"license": "apache-2.0"}
|
priyasingh2109/code-llama-7b-text-to-sql
| null |
[
"license:apache-2.0",
"region:us"
] | null |
2024-04-24T09:46:13+00:00
|
|
null | null |
{}
|
George1243/baichuan2-gguf
| null |
[
"region:us"
] | null |
2024-04-24T09:48:04+00:00
|
|
null | null |
{"license": "openrail"}
|
Coolwowsocoolwow/Coconut_Mall_Drums
| null |
[
"license:openrail",
"region:us"
] | null |
2024-04-24T09:49:17+00:00
|
|
null |
transformers
|
# Uploaded model
- **Developed by:** jurieyel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
|
jurieyel/text2sql-finetuned-llama3-8b-bnb-4bit_q4_k_m-gguf
| null |
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T09:49:26+00:00
|
null | null |
{}
|
AndreiUrsu/Face_Emotion_Clasification_VGG16_35
| null |
[
"region:us"
] | null |
2024-04-24T09:49:39+00:00
|
|
object-detection
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# solar_detection_resnet-50
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "facebook/detr-resnet-50", "model-index": [{"name": "solar_detection_resnet-50", "results": []}]}
|
michalszy888/solar_detection_resnet-50
| null |
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T09:50:24+00:00
|
text-generation
|
transformers
|
# jeiku/Average_Normie_l3_v1_8B AWQ
- Model creator: [jeiku](https://huggingface.co/jeiku)
- Original model: [Average_Normie_l3_v1_8B](https://huggingface.co/jeiku/Average_Normie_l3_v1_8B)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/Average_Normie_l3_v1_8B-AWQ"
system_message = "You are Average_Normie_l3_v1_8B, incarnated as a powerful AI. You were created by jeiku."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support of all model types (a minimal usage sketch follows this list)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
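As an illustration of the vLLM route mentioned above, a minimal offline-inference sketch could look like the following. The prompt and sampling parameters are arbitrary placeholders, not recommended settings.
```python
from vllm import LLM, SamplingParams

# Load the AWQ-quantized checkpoint with vLLM (version 0.2.2 or later).
llm = LLM(model="solidrust/Average_Normie_l3_v1_8B-AWQ", quantization="awq")

sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=256)
outputs = llm.generate(["Tell me about yourself."], sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```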
|
{"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
|
solidrust/Average_Normie_l3_v1_8B-AWQ
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"conversational",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T09:51:40+00:00
|
null | null |
{}
|
mateotfuentes/opus-mt-en-ROMANCE-finetuned-en-to-es
| null |
[
"region:us"
] | null |
2024-04-24T09:53:02+00:00
|
|
null |
transformers
|
{}
|
cstr/wmt21ct2_int8
| null |
[
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T09:53:26+00:00
|
|
object-detection
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/qubvel-hf-co/transformers-detection-model-finetuning-cppe5/runs/z4r0avog)
# facebook-detr-resnet-50-finetuned-10k-cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1507
- Map: 0.3346
- Map 50: 0.5981
- Map 75: 0.3283
- Map Small: 0.1903
- Map Medium: 0.2508
- Map Large: 0.4752
- Mar 1: 0.3914
- Mar 10: 0.5305
- Mar 100: 0.5425
- Mar Small: 0.2821
- Mar Medium: 0.4111
- Mar Large: 0.705
- Map Coverall: 0.5884
- Mar 100 Coverall: 0.7253
- Map Face Shield: 0.3168
- Mar 100 Face Shield: 0.5818
- Map Gloves: 0.2028
- Mar 100 Gloves: 0.3934
- Map Goggles: 0.2102
- Mar 100 Goggles: 0.558
- Map Mask: 0.3549
- Mar 100 Mask: 0.454
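The card does not include a usage snippet. As a hedged illustration, inference with the 🤗 `pipeline` API could look like the sketch below, assuming this repository's checkpoint id; `"image.jpg"` is a placeholder path to a local image.
```python
from transformers import pipeline

# Usage sketch only; the checkpoint id is taken from this repository.
detector = pipeline(
    "object-detection",
    model="qubvel-hf/facebook-detr-resnet-50-finetuned-10k-cppe5",
)

results = detector("image.jpg", threshold=0.5)  # "image.jpg" is a placeholder
for r in results:
    print(r["label"], round(r["score"], 3), r["box"])
```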
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:------------:|:----------------:|:---------------:|:-------------------:|:----------:|:--------------:|:-----------:|:---------------:|:--------:|:------------:|
| 3.616 | 1.0 | 107 | 2.9380 | 0.0046 | 0.0138 | 0.0018 | 0.0012 | 0.0002 | 0.0048 | 0.0166 | 0.0632 | 0.1049 | 0.0024 | 0.0187 | 0.1312 | 0.0223 | 0.4442 | 0.0 | 0.0 | 0.0002 | 0.0361 | 0.0 | 0.0 | 0.0007 | 0.0444 |
| 2.9914 | 2.0 | 214 | 2.7489 | 0.0138 | 0.0444 | 0.0055 | 0.0018 | 0.0007 | 0.0142 | 0.0292 | 0.0888 | 0.1125 | 0.0024 | 0.041 | 0.1216 | 0.0664 | 0.4253 | 0.0 | 0.0 | 0.0004 | 0.0486 | 0.0 | 0.0 | 0.002 | 0.0884 |
| 2.6785 | 3.0 | 321 | 2.5782 | 0.0167 | 0.0534 | 0.0071 | 0.0016 | 0.0005 | 0.0177 | 0.0367 | 0.1002 | 0.1182 | 0.0032 | 0.0382 | 0.1326 | 0.0808 | 0.4584 | 0.0 | 0.0 | 0.0003 | 0.0475 | 0.0 | 0.0 | 0.0022 | 0.0848 |
| 2.4754 | 4.0 | 428 | 2.4978 | 0.0229 | 0.0639 | 0.0099 | 0.002 | 0.0042 | 0.0221 | 0.0417 | 0.1182 | 0.1475 | 0.0155 | 0.058 | 0.1507 | 0.1019 | 0.5318 | 0.0 | 0.0 | 0.001 | 0.0776 | 0.0 | 0.0 | 0.0118 | 0.1283 |
| 2.2601 | 5.0 | 535 | 2.2584 | 0.0342 | 0.0846 | 0.0251 | 0.0055 | 0.0057 | 0.038 | 0.0602 | 0.1313 | 0.1686 | 0.0127 | 0.0815 | 0.1747 | 0.1557 | 0.5558 | 0.0068 | 0.0273 | 0.0016 | 0.0995 | 0.0 | 0.0 | 0.007 | 0.1606 |
| 2.1746 | 6.0 | 642 | 2.0845 | 0.0536 | 0.1098 | 0.0468 | 0.006 | 0.0156 | 0.0545 | 0.0926 | 0.1654 | 0.204 | 0.0391 | 0.1055 | 0.2434 | 0.2312 | 0.5922 | 0.0181 | 0.0964 | 0.0032 | 0.1295 | 0.0 | 0.0 | 0.0157 | 0.202 |
| 2.1412 | 7.0 | 749 | 2.1784 | 0.0398 | 0.1094 | 0.0225 | 0.0064 | 0.0174 | 0.0399 | 0.0913 | 0.1778 | 0.2043 | 0.0126 | 0.126 | 0.2373 | 0.1418 | 0.5214 | 0.0394 | 0.1836 | 0.0019 | 0.1246 | 0.0 | 0.0 | 0.0161 | 0.1919 |
| 2.094 | 8.0 | 856 | 2.1212 | 0.0477 | 0.1209 | 0.027 | 0.0098 | 0.0087 | 0.0474 | 0.0939 | 0.18 | 0.2051 | 0.0503 | 0.1222 | 0.1987 | 0.2067 | 0.5435 | 0.0102 | 0.1382 | 0.0043 | 0.1355 | 0.0 | 0.0 | 0.0175 | 0.2081 |
| 2.0231 | 9.0 | 963 | 1.9452 | 0.0592 | 0.1348 | 0.0455 | 0.0093 | 0.015 | 0.0622 | 0.1093 | 0.2226 | 0.2572 | 0.0603 | 0.1648 | 0.2804 | 0.2438 | 0.6117 | 0.013 | 0.2182 | 0.0085 | 0.1995 | 0.0007 | 0.002 | 0.0299 | 0.2545 |
| 1.9703 | 10.0 | 1070 | 1.9528 | 0.0678 | 0.1508 | 0.0521 | 0.0077 | 0.0232 | 0.075 | 0.1116 | 0.2175 | 0.2571 | 0.0347 | 0.1752 | 0.2909 | 0.2741 | 0.6013 | 0.0274 | 0.2164 | 0.0052 | 0.1798 | 0.0 | 0.0 | 0.0326 | 0.2879 |
| 1.9284 | 11.0 | 1177 | 1.8626 | 0.0843 | 0.1763 | 0.0776 | 0.0263 | 0.0162 | 0.1032 | 0.1378 | 0.2587 | 0.2928 | 0.0701 | 0.1837 | 0.3684 | 0.3527 | 0.6273 | 0.0321 | 0.3509 | 0.0057 | 0.1902 | 0.0001 | 0.004 | 0.0307 | 0.2914 |
| 1.85 | 12.0 | 1284 | 1.8513 | 0.1011 | 0.207 | 0.0878 | 0.0073 | 0.0384 | 0.1079 | 0.1468 | 0.2539 | 0.2795 | 0.0377 | 0.1753 | 0.332 | 0.3676 | 0.6481 | 0.0796 | 0.2818 | 0.0085 | 0.1836 | 0.0005 | 0.012 | 0.0492 | 0.2722 |
| 1.7521 | 13.0 | 1391 | 1.7764 | 0.105 | 0.2085 | 0.0924 | 0.0254 | 0.0396 | 0.1046 | 0.1551 | 0.2772 | 0.3019 | 0.0707 | 0.1945 | 0.3994 | 0.3992 | 0.6416 | 0.0502 | 0.3018 | 0.0142 | 0.2306 | 0.0008 | 0.028 | 0.0606 | 0.3076 |
| 1.6934 | 14.0 | 1498 | 1.7405 | 0.1262 | 0.2607 | 0.1124 | 0.0365 | 0.0736 | 0.1244 | 0.1663 | 0.2984 | 0.3249 | 0.0764 | 0.2257 | 0.4122 | 0.4113 | 0.6506 | 0.1122 | 0.3527 | 0.0099 | 0.2224 | 0.0025 | 0.044 | 0.0952 | 0.3545 |
| 1.6837 | 15.0 | 1605 | 1.6992 | 0.1192 | 0.246 | 0.1049 | 0.0455 | 0.063 | 0.1248 | 0.1809 | 0.3105 | 0.3314 | 0.0763 | 0.223 | 0.4305 | 0.3945 | 0.6435 | 0.1059 | 0.3982 | 0.0159 | 0.235 | 0.0062 | 0.046 | 0.0733 | 0.3343 |
| 1.6609 | 16.0 | 1712 | 1.8307 | 0.09 | 0.2062 | 0.0676 | 0.0277 | 0.0295 | 0.107 | 0.1515 | 0.2788 | 0.3128 | 0.0831 | 0.2117 | 0.3951 | 0.3372 | 0.6084 | 0.0415 | 0.3091 | 0.0089 | 0.2426 | 0.007 | 0.08 | 0.0553 | 0.3237 |
| 1.646 | 17.0 | 1819 | 1.6690 | 0.1319 | 0.275 | 0.1164 | 0.051 | 0.0747 | 0.1526 | 0.1989 | 0.3525 | 0.384 | 0.123 | 0.2814 | 0.5094 | 0.4002 | 0.6247 | 0.065 | 0.4164 | 0.0259 | 0.2973 | 0.0627 | 0.216 | 0.1058 | 0.3657 |
| 1.6016 | 18.0 | 1926 | 1.6489 | 0.1374 | 0.3023 | 0.1079 | 0.0649 | 0.0727 | 0.171 | 0.1936 | 0.3339 | 0.3578 | 0.1163 | 0.2523 | 0.4989 | 0.4021 | 0.5877 | 0.0671 | 0.3782 | 0.0228 | 0.2798 | 0.0493 | 0.174 | 0.1458 | 0.3692 |
| 1.5686 | 19.0 | 2033 | 1.6094 | 0.1497 | 0.3101 | 0.1236 | 0.0418 | 0.0841 | 0.1755 | 0.218 | 0.3731 | 0.4009 | 0.1232 | 0.2946 | 0.5219 | 0.4527 | 0.6468 | 0.0936 | 0.4182 | 0.0263 | 0.2934 | 0.0408 | 0.266 | 0.1349 | 0.3803 |
| 1.5312 | 20.0 | 2140 | 1.5237 | 0.1732 | 0.3454 | 0.1548 | 0.0729 | 0.1081 | 0.2238 | 0.2383 | 0.3908 | 0.4182 | 0.2073 | 0.2878 | 0.5665 | 0.4815 | 0.6636 | 0.1266 | 0.44 | 0.037 | 0.3115 | 0.0266 | 0.288 | 0.1941 | 0.3879 |
| 1.4907 | 21.0 | 2247 | 1.5332 | 0.1792 | 0.3615 | 0.152 | 0.0759 | 0.1167 | 0.2209 | 0.2473 | 0.3973 | 0.4221 | 0.1877 | 0.2963 | 0.5843 | 0.4678 | 0.661 | 0.104 | 0.4491 | 0.0496 | 0.3071 | 0.0722 | 0.314 | 0.2023 | 0.3793 |
| 1.4154 | 22.0 | 2354 | 1.5248 | 0.1742 | 0.3551 | 0.1524 | 0.0867 | 0.0917 | 0.2499 | 0.2541 | 0.3974 | 0.4196 | 0.1728 | 0.2843 | 0.5959 | 0.4828 | 0.663 | 0.0991 | 0.4455 | 0.0502 | 0.294 | 0.057 | 0.342 | 0.182 | 0.3535 |
| 1.4162 | 23.0 | 2461 | 1.4762 | 0.1931 | 0.3804 | 0.1741 | 0.0783 | 0.1263 | 0.2436 | 0.2567 | 0.3938 | 0.4137 | 0.15 | 0.279 | 0.5965 | 0.4868 | 0.6935 | 0.1134 | 0.4182 | 0.0542 | 0.2863 | 0.0953 | 0.312 | 0.2156 | 0.3586 |
| 1.4624 | 24.0 | 2568 | 1.4757 | 0.173 | 0.3577 | 0.1468 | 0.0352 | 0.0993 | 0.2231 | 0.2258 | 0.3952 | 0.4155 | 0.1291 | 0.2987 | 0.5419 | 0.4726 | 0.6552 | 0.0822 | 0.42 | 0.0489 | 0.3186 | 0.0408 | 0.312 | 0.2207 | 0.3717 |
| 1.4275 | 25.0 | 2675 | 1.5116 | 0.1759 | 0.3586 | 0.1602 | 0.0576 | 0.13 | 0.2456 | 0.24 | 0.3856 | 0.4077 | 0.1952 | 0.2959 | 0.5276 | 0.4527 | 0.6643 | 0.1087 | 0.4218 | 0.057 | 0.3169 | 0.0409 | 0.278 | 0.2202 | 0.3576 |
| 1.4002 | 26.0 | 2782 | 1.4787 | 0.1954 | 0.3737 | 0.1821 | 0.0338 | 0.1296 | 0.2585 | 0.2523 | 0.4039 | 0.4237 | 0.1232 | 0.2934 | 0.6087 | 0.4823 | 0.6701 | 0.1562 | 0.4164 | 0.0361 | 0.2973 | 0.0473 | 0.37 | 0.2552 | 0.3646 |
| 1.3991 | 27.0 | 2889 | 1.5002 | 0.1946 | 0.3787 | 0.1846 | 0.0501 | 0.1434 | 0.2684 | 0.2503 | 0.4031 | 0.4282 | 0.1418 | 0.3042 | 0.5904 | 0.4605 | 0.6675 | 0.1255 | 0.4236 | 0.0438 | 0.2907 | 0.0891 | 0.382 | 0.2543 | 0.3773 |
| 1.3786 | 28.0 | 2996 | 1.4114 | 0.2167 | 0.4296 | 0.2011 | 0.055 | 0.1596 | 0.2997 | 0.2711 | 0.4271 | 0.4439 | 0.1325 | 0.3265 | 0.611 | 0.5044 | 0.6929 | 0.1611 | 0.4309 | 0.0871 | 0.3322 | 0.0671 | 0.382 | 0.2639 | 0.3813 |
| 1.3221 | 29.0 | 3103 | 1.4392 | 0.2103 | 0.4118 | 0.19 | 0.0658 | 0.1463 | 0.2814 | 0.2744 | 0.4262 | 0.4473 | 0.1744 | 0.3121 | 0.6248 | 0.4909 | 0.6864 | 0.1448 | 0.4491 | 0.0675 | 0.2929 | 0.0811 | 0.432 | 0.267 | 0.3763 |
| 1.3073 | 30.0 | 3210 | 1.3707 | 0.2243 | 0.4368 | 0.1973 | 0.0558 | 0.1679 | 0.3023 | 0.2749 | 0.4348 | 0.456 | 0.163 | 0.3312 | 0.6184 | 0.5199 | 0.6987 | 0.1462 | 0.4545 | 0.0965 | 0.3536 | 0.0996 | 0.394 | 0.2592 | 0.3793 |
| 1.2889 | 31.0 | 3317 | 1.3956 | 0.2121 | 0.4279 | 0.1948 | 0.0479 | 0.1462 | 0.2967 | 0.2589 | 0.4308 | 0.4478 | 0.1559 | 0.3281 | 0.6031 | 0.4899 | 0.6779 | 0.1249 | 0.4309 | 0.0952 | 0.3372 | 0.111 | 0.43 | 0.2395 | 0.3631 |
| 1.3164 | 32.0 | 3424 | 1.4315 | 0.2094 | 0.4238 | 0.1815 | 0.0341 | 0.1474 | 0.2662 | 0.2662 | 0.4129 | 0.4247 | 0.0857 | 0.3045 | 0.5793 | 0.4874 | 0.6766 | 0.1177 | 0.3927 | 0.0835 | 0.3027 | 0.1297 | 0.396 | 0.2288 | 0.3556 |
| 1.3735 | 33.0 | 3531 | 1.3750 | 0.2322 | 0.4495 | 0.223 | 0.0762 | 0.1597 | 0.297 | 0.3006 | 0.4463 | 0.4602 | 0.1737 | 0.3294 | 0.6188 | 0.5351 | 0.7006 | 0.1626 | 0.4545 | 0.0956 | 0.3251 | 0.1178 | 0.452 | 0.2498 | 0.3687 |
| 1.2856 | 34.0 | 3638 | 1.4287 | 0.2257 | 0.4418 | 0.2076 | 0.0646 | 0.1505 | 0.3169 | 0.2616 | 0.4221 | 0.4429 | 0.1341 | 0.3274 | 0.5719 | 0.5082 | 0.6747 | 0.1528 | 0.4509 | 0.11 | 0.3082 | 0.0935 | 0.414 | 0.264 | 0.3667 |
| 1.2558 | 35.0 | 3745 | 1.3187 | 0.2401 | 0.4657 | 0.2202 | 0.0721 | 0.1688 | 0.3091 | 0.3008 | 0.4597 | 0.4756 | 0.1618 | 0.3272 | 0.6627 | 0.5188 | 0.7032 | 0.1495 | 0.4891 | 0.1232 | 0.3415 | 0.1313 | 0.45 | 0.2776 | 0.3939 |
| 1.1862 | 36.0 | 3852 | 1.2866 | 0.2588 | 0.4753 | 0.2457 | 0.0634 | 0.1939 | 0.3323 | 0.3199 | 0.4643 | 0.4754 | 0.1514 | 0.342 | 0.6447 | 0.5416 | 0.7006 | 0.213 | 0.4745 | 0.13 | 0.3432 | 0.1414 | 0.474 | 0.2682 | 0.3848 |
| 1.2361 | 37.0 | 3959 | 1.3003 | 0.2531 | 0.4754 | 0.2405 | 0.0619 | 0.191 | 0.3391 | 0.3006 | 0.4612 | 0.4814 | 0.1487 | 0.3592 | 0.6214 | 0.5226 | 0.7039 | 0.2095 | 0.4873 | 0.1069 | 0.3678 | 0.1434 | 0.46 | 0.2832 | 0.3879 |
| 1.1919 | 38.0 | 4066 | 1.3157 | 0.2411 | 0.4507 | 0.2127 | 0.1031 | 0.1691 | 0.3298 | 0.3007 | 0.4654 | 0.4887 | 0.1863 | 0.3628 | 0.6209 | 0.5242 | 0.713 | 0.1632 | 0.4927 | 0.1206 | 0.3699 | 0.1164 | 0.478 | 0.2812 | 0.3899 |
| 1.2089 | 39.0 | 4173 | 1.3253 | 0.2378 | 0.4618 | 0.2139 | 0.06 | 0.1662 | 0.3619 | 0.2932 | 0.456 | 0.4747 | 0.1647 | 0.3502 | 0.6456 | 0.5365 | 0.687 | 0.1477 | 0.44 | 0.1341 | 0.3612 | 0.1181 | 0.512 | 0.2526 | 0.3732 |
| 1.1989 | 40.0 | 4280 | 1.3079 | 0.243 | 0.4632 | 0.2212 | 0.09 | 0.1745 | 0.3635 | 0.2888 | 0.4601 | 0.4815 | 0.1716 | 0.3643 | 0.6676 | 0.5474 | 0.711 | 0.1511 | 0.5055 | 0.1275 | 0.3596 | 0.1169 | 0.46 | 0.2721 | 0.3712 |
| 1.123 | 41.0 | 4387 | 1.2717 | 0.2545 | 0.4648 | 0.2389 | 0.0865 | 0.1746 | 0.3768 | 0.3137 | 0.4805 | 0.5019 | 0.2341 | 0.3727 | 0.6742 | 0.5567 | 0.7182 | 0.1537 | 0.5127 | 0.1243 | 0.3738 | 0.1443 | 0.494 | 0.2933 | 0.4106 |
| 1.1247 | 42.0 | 4494 | 1.3207 | 0.2547 | 0.4802 | 0.2359 | 0.0646 | 0.1709 | 0.3638 | 0.3091 | 0.4584 | 0.4749 | 0.1666 | 0.339 | 0.6682 | 0.5703 | 0.7162 | 0.2116 | 0.5055 | 0.1441 | 0.341 | 0.1132 | 0.474 | 0.2342 | 0.3379 |
| 1.127 | 43.0 | 4601 | 1.2355 | 0.2651 | 0.5025 | 0.2454 | 0.0659 | 0.2016 | 0.3784 | 0.3272 | 0.4851 | 0.5009 | 0.1616 | 0.3671 | 0.691 | 0.5439 | 0.7188 | 0.2047 | 0.5164 | 0.1539 | 0.382 | 0.1291 | 0.49 | 0.2942 | 0.3975 |
| 1.1019 | 44.0 | 4708 | 1.2395 | 0.2684 | 0.4895 | 0.2566 | 0.0892 | 0.1954 | 0.3819 | 0.3367 | 0.4887 | 0.5068 | 0.183 | 0.3718 | 0.6757 | 0.5447 | 0.7195 | 0.2254 | 0.52 | 0.1424 | 0.3781 | 0.1198 | 0.506 | 0.3096 | 0.4106 |
| 1.1069 | 45.0 | 4815 | 1.2494 | 0.2673 | 0.4805 | 0.2646 | 0.0882 | 0.1876 | 0.3995 | 0.3261 | 0.4852 | 0.5081 | 0.2162 | 0.3564 | 0.7021 | 0.5524 | 0.7149 | 0.2068 | 0.5255 | 0.1756 | 0.3907 | 0.1071 | 0.498 | 0.2945 | 0.4116 |
| 1.0929 | 46.0 | 4922 | 1.2631 | 0.2738 | 0.5088 | 0.2541 | 0.0801 | 0.1949 | 0.375 | 0.3189 | 0.4701 | 0.4968 | 0.2285 | 0.3514 | 0.6741 | 0.5683 | 0.7136 | 0.2407 | 0.5109 | 0.184 | 0.3852 | 0.1181 | 0.49 | 0.2578 | 0.3843 |
| 1.1297 | 47.0 | 5029 | 1.2737 | 0.263 | 0.4986 | 0.2373 | 0.0855 | 0.1837 | 0.3832 | 0.315 | 0.4695 | 0.4879 | 0.2051 | 0.3548 | 0.6878 | 0.564 | 0.7052 | 0.2275 | 0.5182 | 0.1486 | 0.3481 | 0.0788 | 0.462 | 0.2959 | 0.4061 |
| 1.1593 | 48.0 | 5136 | 1.2690 | 0.2644 | 0.5107 | 0.2612 | 0.0745 | 0.1985 | 0.3765 | 0.312 | 0.4634 | 0.4813 | 0.1852 | 0.3491 | 0.6642 | 0.5391 | 0.7019 | 0.2197 | 0.5073 | 0.1748 | 0.3601 | 0.1367 | 0.466 | 0.2516 | 0.3712 |
| 1.1372 | 49.0 | 5243 | 1.2779 | 0.2707 | 0.4987 | 0.2736 | 0.0876 | 0.1936 | 0.4004 | 0.3236 | 0.4708 | 0.489 | 0.2231 | 0.3449 | 0.6683 | 0.5527 | 0.7182 | 0.2438 | 0.5 | 0.1566 | 0.3667 | 0.1353 | 0.466 | 0.2652 | 0.3939 |
| 1.0796 | 50.0 | 5350 | 1.1901 | 0.2915 | 0.5427 | 0.2927 | 0.0752 | 0.2182 | 0.428 | 0.336 | 0.4896 | 0.5065 | 0.1738 | 0.3675 | 0.6844 | 0.5609 | 0.7253 | 0.2496 | 0.5073 | 0.1813 | 0.3923 | 0.1545 | 0.496 | 0.3109 | 0.4116 |
| 1.0472 | 51.0 | 5457 | 1.2350 | 0.2861 | 0.5365 | 0.2647 | 0.0842 | 0.2116 | 0.4284 | 0.3274 | 0.4874 | 0.4988 | 0.1772 | 0.3617 | 0.6809 | 0.5501 | 0.7136 | 0.247 | 0.5018 | 0.1633 | 0.3339 | 0.1554 | 0.526 | 0.315 | 0.4187 |
| 1.0605 | 52.0 | 5564 | 1.1940 | 0.2927 | 0.5423 | 0.2773 | 0.1478 | 0.2135 | 0.429 | 0.3472 | 0.49 | 0.5068 | 0.2497 | 0.3565 | 0.6921 | 0.5544 | 0.7097 | 0.2547 | 0.4818 | 0.1827 | 0.3913 | 0.1602 | 0.528 | 0.3113 | 0.4232 |
| 1.0506 | 53.0 | 5671 | 1.2127 | 0.2751 | 0.5239 | 0.2555 | 0.101 | 0.1945 | 0.4041 | 0.3218 | 0.4813 | 0.4952 | 0.2365 | 0.3461 | 0.6708 | 0.5685 | 0.724 | 0.2394 | 0.4873 | 0.1758 | 0.3634 | 0.1136 | 0.52 | 0.2784 | 0.3813 |
| 1.053 | 54.0 | 5778 | 1.1906 | 0.2994 | 0.5424 | 0.2941 | 0.1302 | 0.213 | 0.4167 | 0.3406 | 0.5033 | 0.5241 | 0.2372 | 0.3795 | 0.6998 | 0.5648 | 0.7266 | 0.2552 | 0.5273 | 0.1795 | 0.3732 | 0.176 | 0.564 | 0.3217 | 0.4293 |
| 1.0142 | 55.0 | 5885 | 1.1860 | 0.2937 | 0.5326 | 0.28 | 0.1125 | 0.2158 | 0.4177 | 0.3366 | 0.4924 | 0.5081 | 0.2158 | 0.3796 | 0.6679 | 0.567 | 0.7195 | 0.2523 | 0.5018 | 0.1687 | 0.3765 | 0.1754 | 0.534 | 0.305 | 0.4086 |
| 1.0391 | 56.0 | 5992 | 1.1970 | 0.2856 | 0.5244 | 0.2672 | 0.106 | 0.2012 | 0.4148 | 0.3396 | 0.4901 | 0.5089 | 0.2218 | 0.3697 | 0.6807 | 0.561 | 0.7065 | 0.2175 | 0.4982 | 0.1701 | 0.3661 | 0.1658 | 0.548 | 0.3133 | 0.4258 |
| 1.0031 | 57.0 | 6099 | 1.1818 | 0.2964 | 0.5385 | 0.2752 | 0.1798 | 0.2196 | 0.4145 | 0.3537 | 0.5142 | 0.5287 | 0.3015 | 0.3943 | 0.6906 | 0.5677 | 0.7247 | 0.2248 | 0.5382 | 0.1842 | 0.3809 | 0.17 | 0.562 | 0.3353 | 0.4379 |
| 0.9794 | 58.0 | 6206 | 1.1965 | 0.2903 | 0.5258 | 0.2852 | 0.1404 | 0.2064 | 0.422 | 0.3436 | 0.4984 | 0.5121 | 0.2596 | 0.3677 | 0.6908 | 0.5615 | 0.7149 | 0.2557 | 0.5091 | 0.155 | 0.3656 | 0.1543 | 0.536 | 0.3247 | 0.4348 |
| 1.0123 | 59.0 | 6313 | 1.1943 | 0.2831 | 0.5389 | 0.2672 | 0.1388 | 0.213 | 0.4055 | 0.3503 | 0.5034 | 0.5191 | 0.2761 | 0.3874 | 0.6878 | 0.5536 | 0.711 | 0.2217 | 0.52 | 0.1557 | 0.3672 | 0.1714 | 0.562 | 0.3133 | 0.4354 |
| 0.9814 | 60.0 | 6420 | 1.2012 | 0.303 | 0.5553 | 0.2868 | 0.1351 | 0.2209 | 0.4293 | 0.3507 | 0.5019 | 0.5164 | 0.2826 | 0.3747 | 0.6823 | 0.5686 | 0.7104 | 0.2642 | 0.5036 | 0.1664 | 0.3732 | 0.198 | 0.56 | 0.3177 | 0.4348 |
| 0.9583 | 61.0 | 6527 | 1.1838 | 0.3041 | 0.5548 | 0.305 | 0.1338 | 0.2199 | 0.4376 | 0.3457 | 0.4977 | 0.5123 | 0.2561 | 0.3739 | 0.6789 | 0.5853 | 0.724 | 0.2694 | 0.5164 | 0.193 | 0.3836 | 0.1766 | 0.522 | 0.2962 | 0.4157 |
| 0.9506 | 62.0 | 6634 | 1.1634 | 0.3013 | 0.5503 | 0.2856 | 0.1514 | 0.2189 | 0.4223 | 0.3473 | 0.5123 | 0.5251 | 0.2774 | 0.3852 | 0.6884 | 0.5762 | 0.7117 | 0.2389 | 0.5345 | 0.1946 | 0.3896 | 0.1656 | 0.542 | 0.3309 | 0.4475 |
| 0.9914 | 63.0 | 6741 | 1.1681 | 0.3051 | 0.5439 | 0.2905 | 0.1401 | 0.2514 | 0.4296 | 0.3476 | 0.5 | 0.5137 | 0.2761 | 0.4054 | 0.6728 | 0.5816 | 0.7143 | 0.2752 | 0.5436 | 0.1746 | 0.3656 | 0.1677 | 0.518 | 0.3264 | 0.4268 |
| 0.9516 | 64.0 | 6848 | 1.1811 | 0.2933 | 0.532 | 0.2921 | 0.1145 | 0.2285 | 0.4111 | 0.3384 | 0.4869 | 0.5034 | 0.2024 | 0.38 | 0.6794 | 0.5576 | 0.7019 | 0.2721 | 0.5291 | 0.1595 | 0.3579 | 0.1524 | 0.502 | 0.3251 | 0.4263 |
| 0.9232 | 65.0 | 6955 | 1.1625 | 0.304 | 0.551 | 0.2776 | 0.1706 | 0.2403 | 0.4162 | 0.3477 | 0.5064 | 0.5193 | 0.2514 | 0.3986 | 0.6958 | 0.5644 | 0.7188 | 0.2713 | 0.5327 | 0.1806 | 0.388 | 0.1681 | 0.526 | 0.3357 | 0.4308 |
| 0.9479 | 66.0 | 7062 | 1.1899 | 0.3017 | 0.5539 | 0.2705 | 0.157 | 0.2135 | 0.4417 | 0.36 | 0.5122 | 0.5277 | 0.2622 | 0.3857 | 0.7181 | 0.5725 | 0.7136 | 0.2637 | 0.5636 | 0.173 | 0.3814 | 0.1876 | 0.56 | 0.3117 | 0.4197 |
| 0.9041 | 67.0 | 7169 | 1.1758 | 0.3142 | 0.5585 | 0.3089 | 0.1431 | 0.2326 | 0.4294 | 0.3619 | 0.5049 | 0.5161 | 0.2519 | 0.3782 | 0.6862 | 0.5954 | 0.7266 | 0.2947 | 0.5182 | 0.1781 | 0.3617 | 0.1817 | 0.558 | 0.3213 | 0.4162 |
| 0.9385 | 68.0 | 7276 | 1.1483 | 0.3169 | 0.5782 | 0.2985 | 0.1935 | 0.232 | 0.4517 | 0.3714 | 0.5241 | 0.5357 | 0.2918 | 0.3976 | 0.7021 | 0.5802 | 0.7312 | 0.2987 | 0.5636 | 0.1982 | 0.3896 | 0.1854 | 0.566 | 0.3218 | 0.4283 |
| 0.9177 | 69.0 | 7383 | 1.1799 | 0.3144 | 0.5645 | 0.3005 | 0.2049 | 0.2287 | 0.456 | 0.3629 | 0.5138 | 0.5294 | 0.3176 | 0.3806 | 0.6951 | 0.5789 | 0.7227 | 0.2856 | 0.5655 | 0.1928 | 0.3852 | 0.1902 | 0.544 | 0.3246 | 0.4298 |
| 0.8781 | 70.0 | 7490 | 1.1637 | 0.3114 | 0.5637 | 0.2894 | 0.1957 | 0.2345 | 0.4385 | 0.3655 | 0.5149 | 0.5293 | 0.3021 | 0.3919 | 0.6927 | 0.58 | 0.7331 | 0.2764 | 0.5473 | 0.1943 | 0.3858 | 0.191 | 0.55 | 0.3151 | 0.4303 |
| 0.9337 | 71.0 | 7597 | 1.1627 | 0.3156 | 0.5643 | 0.3023 | 0.1742 | 0.229 | 0.4506 | 0.3683 | 0.517 | 0.5322 | 0.302 | 0.3825 | 0.7129 | 0.5811 | 0.7357 | 0.2999 | 0.54 | 0.1897 | 0.3831 | 0.1859 | 0.578 | 0.3212 | 0.4242 |
| 0.8986 | 72.0 | 7704 | 1.1679 | 0.3119 | 0.5658 | 0.2888 | 0.1704 | 0.24 | 0.4547 | 0.3554 | 0.5196 | 0.5338 | 0.2748 | 0.3949 | 0.7075 | 0.5799 | 0.7286 | 0.3 | 0.5855 | 0.1882 | 0.3896 | 0.1744 | 0.534 | 0.317 | 0.4313 |
| 0.8766 | 73.0 | 7811 | 1.1545 | 0.319 | 0.5724 | 0.2952 | 0.1869 | 0.2428 | 0.4541 | 0.3684 | 0.5239 | 0.5388 | 0.2905 | 0.4015 | 0.7063 | 0.5834 | 0.7325 | 0.2992 | 0.5782 | 0.1877 | 0.3913 | 0.2001 | 0.564 | 0.3247 | 0.4283 |
| 0.8668 | 74.0 | 7918 | 1.1662 | 0.3178 | 0.5811 | 0.3096 | 0.1839 | 0.2384 | 0.4517 | 0.3609 | 0.5171 | 0.5335 | 0.2927 | 0.3982 | 0.6937 | 0.5824 | 0.7344 | 0.3004 | 0.5582 | 0.1927 | 0.3863 | 0.1833 | 0.552 | 0.3304 | 0.4364 |
| 0.9248 | 75.0 | 8025 | 1.1544 | 0.3152 | 0.5751 | 0.2907 | 0.1729 | 0.2337 | 0.444 | 0.3545 | 0.5134 | 0.5281 | 0.2933 | 0.3852 | 0.705 | 0.5802 | 0.726 | 0.302 | 0.5473 | 0.1915 | 0.3891 | 0.1634 | 0.528 | 0.3389 | 0.45 |
| 0.8511 | 76.0 | 8132 | 1.1887 | 0.3263 | 0.5907 | 0.3062 | 0.1854 | 0.2392 | 0.4552 | 0.3585 | 0.5155 | 0.5295 | 0.2971 | 0.3832 | 0.7044 | 0.5832 | 0.7227 | 0.3305 | 0.5691 | 0.1884 | 0.3678 | 0.1977 | 0.542 | 0.3315 | 0.446 |
| 0.8675 | 77.0 | 8239 | 1.1739 | 0.3276 | 0.5954 | 0.3103 | 0.1956 | 0.2426 | 0.4537 | 0.361 | 0.5207 | 0.5333 | 0.2866 | 0.3944 | 0.6991 | 0.5889 | 0.7286 | 0.322 | 0.5655 | 0.2014 | 0.3825 | 0.1879 | 0.54 | 0.3379 | 0.45 |
| 0.821 | 78.0 | 8346 | 1.1699 | 0.3277 | 0.5932 | 0.313 | 0.2021 | 0.2523 | 0.4642 | 0.3718 | 0.5263 | 0.5403 | 0.283 | 0.4125 | 0.7088 | 0.5832 | 0.7299 | 0.3294 | 0.5745 | 0.2013 | 0.4011 | 0.1897 | 0.552 | 0.3347 | 0.4439 |
| 0.8467 | 79.0 | 8453 | 1.1643 | 0.332 | 0.5979 | 0.3249 | 0.2042 | 0.2473 | 0.4766 | 0.3875 | 0.5246 | 0.5332 | 0.3188 | 0.4013 | 0.6999 | 0.5873 | 0.7318 | 0.3269 | 0.5655 | 0.2038 | 0.3836 | 0.2126 | 0.544 | 0.3296 | 0.4409 |
| 0.8951 | 80.0 | 8560 | 1.1533 | 0.3359 | 0.5938 | 0.3165 | 0.2187 | 0.2461 | 0.4588 | 0.3917 | 0.5416 | 0.5544 | 0.3381 | 0.4132 | 0.7085 | 0.5918 | 0.737 | 0.3356 | 0.5836 | 0.1981 | 0.3967 | 0.2108 | 0.604 | 0.3433 | 0.4505 |
| 0.8303 | 81.0 | 8667 | 1.1637 | 0.3298 | 0.5901 | 0.306 | 0.1872 | 0.244 | 0.4758 | 0.3891 | 0.5293 | 0.5387 | 0.279 | 0.405 | 0.7137 | 0.5887 | 0.7312 | 0.3384 | 0.5727 | 0.1918 | 0.3776 | 0.1974 | 0.57 | 0.3327 | 0.4419 |
| 0.8386 | 82.0 | 8774 | 1.1539 | 0.3293 | 0.5917 | 0.3013 | 0.1962 | 0.2435 | 0.4732 | 0.3916 | 0.5291 | 0.5404 | 0.2927 | 0.4116 | 0.7092 | 0.5858 | 0.7299 | 0.3162 | 0.5764 | 0.1995 | 0.3891 | 0.2015 | 0.556 | 0.3432 | 0.4505 |
| 0.809 | 83.0 | 8881 | 1.1587 | 0.3284 | 0.5927 | 0.3001 | 0.1922 | 0.2476 | 0.4659 | 0.3861 | 0.5219 | 0.5347 | 0.2976 | 0.4068 | 0.6911 | 0.5829 | 0.726 | 0.3299 | 0.5818 | 0.1993 | 0.3918 | 0.1948 | 0.54 | 0.3351 | 0.4338 |
| 0.8205 | 84.0 | 8988 | 1.1508 | 0.3333 | 0.5942 | 0.3156 | 0.2093 | 0.2456 | 0.4866 | 0.3853 | 0.5314 | 0.5429 | 0.3255 | 0.4043 | 0.7107 | 0.5863 | 0.7279 | 0.3185 | 0.5691 | 0.203 | 0.3885 | 0.2253 | 0.592 | 0.3335 | 0.4369 |
| 0.8459 | 85.0 | 9095 | 1.1513 | 0.335 | 0.5963 | 0.3219 | 0.2008 | 0.2476 | 0.4894 | 0.3915 | 0.527 | 0.537 | 0.3072 | 0.3987 | 0.7064 | 0.5833 | 0.7234 | 0.3246 | 0.5709 | 0.2059 | 0.3814 | 0.2144 | 0.562 | 0.3465 | 0.4475 |
| 0.8008 | 86.0 | 9202 | 1.1530 | 0.3245 | 0.5927 | 0.3062 | 0.1832 | 0.2477 | 0.4705 | 0.3825 | 0.5248 | 0.5359 | 0.2794 | 0.41 | 0.6952 | 0.5836 | 0.7299 | 0.3002 | 0.5709 | 0.1856 | 0.3749 | 0.2223 | 0.57 | 0.3308 | 0.4338 |
| 0.7936 | 87.0 | 9309 | 1.1558 | 0.3266 | 0.5861 | 0.3079 | 0.1893 | 0.2374 | 0.4833 | 0.3848 | 0.5226 | 0.5357 | 0.2929 | 0.4 | 0.7015 | 0.5788 | 0.7299 | 0.3111 | 0.5745 | 0.1929 | 0.3814 | 0.2219 | 0.56 | 0.3281 | 0.4328 |
| 0.7991 | 88.0 | 9416 | 1.1667 | 0.3246 | 0.5969 | 0.2974 | 0.167 | 0.2393 | 0.49 | 0.379 | 0.5165 | 0.53 | 0.2649 | 0.4023 | 0.6997 | 0.5824 | 0.726 | 0.3114 | 0.5636 | 0.1853 | 0.3721 | 0.2107 | 0.55 | 0.3335 | 0.4384 |
| 0.7914 | 89.0 | 9523 | 1.1521 | 0.3281 | 0.5926 | 0.3144 | 0.1756 | 0.2404 | 0.4755 | 0.3851 | 0.5305 | 0.5402 | 0.2902 | 0.4018 | 0.7063 | 0.5845 | 0.7338 | 0.313 | 0.5636 | 0.2014 | 0.3863 | 0.2031 | 0.576 | 0.3386 | 0.4414 |
| 0.7722 | 90.0 | 9630 | 1.1521 | 0.3292 | 0.5978 | 0.3104 | 0.1804 | 0.2419 | 0.4764 | 0.3893 | 0.5308 | 0.5407 | 0.2913 | 0.4016 | 0.7088 | 0.5845 | 0.7253 | 0.3133 | 0.5727 | 0.1944 | 0.382 | 0.2064 | 0.57 | 0.3475 | 0.4535 |
| 0.7563 | 91.0 | 9737 | 1.1742 | 0.331 | 0.5967 | 0.3157 | 0.1783 | 0.2413 | 0.4789 | 0.393 | 0.5273 | 0.5397 | 0.302 | 0.403 | 0.7119 | 0.5886 | 0.724 | 0.3014 | 0.5673 | 0.2036 | 0.3874 | 0.2167 | 0.574 | 0.3445 | 0.446 |
| 0.7714 | 92.0 | 9844 | 1.1554 | 0.3274 | 0.5924 | 0.3159 | 0.1808 | 0.2393 | 0.4769 | 0.387 | 0.5304 | 0.5426 | 0.2972 | 0.4029 | 0.712 | 0.5857 | 0.7266 | 0.3051 | 0.5855 | 0.1966 | 0.3874 | 0.2062 | 0.568 | 0.3435 | 0.4455 |
| 0.7669 | 93.0 | 9951 | 1.1550 | 0.325 | 0.5913 | 0.2998 | 0.1885 | 0.2429 | 0.4674 | 0.3818 | 0.5262 | 0.5388 | 0.3039 | 0.4037 | 0.7082 | 0.5781 | 0.7227 | 0.301 | 0.5673 | 0.1997 | 0.3995 | 0.205 | 0.56 | 0.3414 | 0.4444 |
| 0.7811 | 94.0 | 10058 | 1.1591 | 0.3263 | 0.5903 | 0.31 | 0.1742 | 0.2371 | 0.4802 | 0.3841 | 0.5252 | 0.5356 | 0.2761 | 0.3988 | 0.7018 | 0.5788 | 0.7234 | 0.2981 | 0.5582 | 0.1989 | 0.3885 | 0.2056 | 0.558 | 0.3498 | 0.45 |
| 0.7932 | 95.0 | 10165 | 1.1517 | 0.3301 | 0.5926 | 0.3201 | 0.1803 | 0.2432 | 0.4713 | 0.3859 | 0.5256 | 0.5361 | 0.2813 | 0.402 | 0.7011 | 0.583 | 0.7292 | 0.3034 | 0.5673 | 0.2052 | 0.3918 | 0.2131 | 0.546 | 0.346 | 0.446 |
| 0.77 | 96.0 | 10272 | 1.1538 | 0.3319 | 0.5967 | 0.3222 | 0.1813 | 0.2434 | 0.4755 | 0.3891 | 0.5203 | 0.5319 | 0.2707 | 0.3965 | 0.694 | 0.5833 | 0.7299 | 0.3101 | 0.56 | 0.2049 | 0.3907 | 0.223 | 0.538 | 0.3384 | 0.4409 |
| 0.764 | 97.0 | 10379 | 1.1521 | 0.3323 | 0.6031 | 0.3131 | 0.1855 | 0.2427 | 0.4726 | 0.3908 | 0.5256 | 0.5369 | 0.2745 | 0.4053 | 0.6897 | 0.5893 | 0.7266 | 0.31 | 0.5764 | 0.1998 | 0.3913 | 0.2162 | 0.546 | 0.3464 | 0.4444 |
| 0.7517 | 98.0 | 10486 | 1.1509 | 0.3328 | 0.601 | 0.3287 | 0.1866 | 0.2448 | 0.478 | 0.3888 | 0.5252 | 0.5395 | 0.2761 | 0.4053 | 0.7042 | 0.5868 | 0.724 | 0.3141 | 0.5782 | 0.1991 | 0.3907 | 0.2159 | 0.554 | 0.3482 | 0.4505 |
| 0.7519 | 99.0 | 10593 | 1.1516 | 0.3334 | 0.597 | 0.3165 | 0.1921 | 0.2488 | 0.4761 | 0.3897 | 0.5306 | 0.5431 | 0.2851 | 0.4112 | 0.7061 | 0.5871 | 0.726 | 0.3091 | 0.58 | 0.2027 | 0.3951 | 0.2161 | 0.562 | 0.3522 | 0.4525 |
| 0.7373 | 100.0 | 10700 | 1.1507 | 0.3346 | 0.5981 | 0.3283 | 0.1903 | 0.2508 | 0.4752 | 0.3914 | 0.5305 | 0.5425 | 0.2821 | 0.4111 | 0.705 | 0.5884 | 0.7253 | 0.3168 | 0.5818 | 0.2028 | 0.3934 | 0.2102 | 0.558 | 0.3549 | 0.454 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.18.0
- Tokenizers 0.19.0
|
{"license": "apache-2.0", "tags": ["object-detection", "vision", "generated_from_trainer"], "base_model": "facebook/detr-resnet-50", "model-index": [{"name": "facebook-detr-resnet-50-finetuned-10k-cppe5", "results": []}]}
|
qubvel-hf/facebook-detr-resnet-50-finetuned-10k-cppe5
| null |
[
"transformers",
"safetensors",
"detr",
"object-detection",
"vision",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T09:53:54+00:00
|
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Anakeen/Meta_Llama_3_8B_trained_TER
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T09:53:58+00:00
|
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [unsloth/llama-3-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 60
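For reference, here is a minimal sketch of how the values above could be expressed with the 🤗 `TrainingArguments` API; the output directory and any option not listed in the card are assumptions, and the actual run used Unsloth/TRL on top of this configuration.

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above (not the exact
# training script); anything not stated in the card is an assumption.
args = TrainingArguments(
    output_dir="outputs",              # assumption: matches the model name
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,     # effective train batch size of 8
    seed=3407,
    lr_scheduler_type="linear",
    warmup_steps=10,
    max_steps=60,
)
```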
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "unsloth", "generated_from_trainer"], "base_model": "unsloth/llama-3-8b-bnb-4bit", "model-index": [{"name": "outputs", "results": []}]}
|
richie-ghost/outputs
| null |
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:llama2",
"region:us"
] | null |
2024-04-24T09:54:21+00:00
|
text-generation
|
transformers
|
# GPT-J 6B
## Model Description
GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self-attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Intended Use and Limitations
GPT-J learns an inner representation of the English language that can be used to
extract features useful for downstream tasks. The model is, however, best at what it was
pretrained for: generating text from a prompt.
### Out-of-scope use
GPT-J-6B is **not** intended for deployment without fine-tuning, supervision,
and/or moderation. It is not in itself a product and cannot be used for
human-facing interactions. For example, the model may generate harmful or
offensive text. Please evaluate the risks associated with your particular use case.
GPT-J-6B was trained on an English-language only dataset, and is thus **not**
suitable for translation or generating text in other languages.
GPT-J-6B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose
or powering commercial chatbots. This means GPT-J-6B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
```
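As a quick follow-up, here is a minimal generation sketch using the objects loaded above; the prompt and sampling settings are illustrative, not recommendations:

```python
# Assumes `tokenizer` and `model` from the snippet above.
inputs = tokenizer("EleutherAI is", return_tensors="pt")

output_ids = model.generate(
    **inputs,
    do_sample=True,        # sample instead of greedy decoding
    temperature=0.9,       # illustrative value
    max_new_tokens=50,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```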
## Training data
GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
## Training procedure
This model was trained for 402 billion tokens over 383,500 steps on TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
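Concretely, this objective is the standard next-token negative log-likelihood, \\(\mathcal{L}(\theta) = -\sum_{t} \log p_\theta(x_t \mid x_{<t})\\), averaged over the training tokens.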
## Evaluation results
<figure>
| Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) |
|--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------|
| Random Chance | ✓ | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 |
| GPT-3 Ada‡ | ✗ | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- |
| GPT-2 1.5B | ✓ | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 |
| GPT-Neo 1.3B‡ | ✓ | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 |
| Megatron-2.5B* | ✗ | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 |
| GPT-Neo 2.7B‡ | ✓ | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 |
| GPT-3 1.3B*‡ | ✗ | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 |
| GPT-3 Babbage‡ | ✗ | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- |
| Megatron-8.3B* | ✗ | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 |
| GPT-3 2.7B*‡ | ✗ | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 |
| Megatron-11B† | ✓ | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 |
| **GPT-J 6B‡** | **✓** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** |
| GPT-3 6.7B*‡ | ✗ | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 |
| GPT-3 Curie‡ | ✗ | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- |
| GPT-3 13B*‡ | ✗ | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 |
| GPT-3 175B*‡ | ✗ | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 |
| GPT-3 Davinci‡ | ✗ | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- |
<figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p>
<p><strong>*</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by
running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released
weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these
might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more
details.</p>
<p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not
reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a>
<a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>)
Thus, evaluation was not attempted.</p>
<p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models
failed to deduplicate training data for certain test sets, while the GPT-Neo models, as well as this one, are
trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure>
## Citation and Related Information
### BibTeX entry
To cite this model:
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
Thanks to everyone who has helped out in one way or another (listed alphabetically):
- [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues.
- [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package.
- [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table.
- [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo.
- [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts.
- [Janko Prester](https://github.com/jprester/) for creating the web demo frontend.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm"], "datasets": ["EleutherAI/pile"]}
|
titanbot/gptj-6b-base-4bit-AWQ
| null |
[
"transformers",
"gptj",
"text-generation",
"pytorch",
"causal-lm",
"en",
"dataset:EleutherAI/pile",
"arxiv:2104.09864",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T09:54:46+00:00
|
null | null |
{}
|
priyasingh2109/code-llama-7b-text-to-sql-1
| null |
[
"region:us"
] | null |
2024-04-24T09:54:59+00:00
|
|
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
jin-cheon/code-search-net-tokenizer
| null |
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T09:56:05+00:00
|
text-generation
|
transformers
|
{}
|
titanbot/opt-125m-base-4bit-AWQ
| null |
[
"transformers",
"opt",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T09:56:28+00:00
|
|
summarization
| null |
{"language": ["lo"], "license": "apache-2.0", "tags": ["summarization"], "metrics": ["rouge"], "pipeline_tag": "summarization"}
|
Jnjnpx/test-dummy-repo
| null |
[
"summarization",
"lo",
"license:apache-2.0",
"region:us"
] | null |
2024-04-24T09:56:39+00:00
|
|
null | null |
{"license": "apache-2.0"}
|
yyyyyyg/Llama-2-7b-chat-hf-rank64
| null |
[
"license:apache-2.0",
"region:us"
] | null |
2024-04-24T09:56:43+00:00
|
|
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0
|
{"library_name": "peft", "base_model": "vilm/vinallama-7b-chat"}
|
trungtienluong/vinallama-childcare
| null |
[
"peft",
"arxiv:1910.09700",
"base_model:vilm/vinallama-7b-chat",
"region:us"
] | null |
2024-04-24T09:57:00+00:00
|
null | null |
{}
|
KevinKibe/whisper-large-v3-finetuned
| null |
[
"region:us"
] | null |
2024-04-24T09:57:27+00:00
|
|
null | null |
{}
|
Soykot/bot
| null |
[
"region:us"
] | null |
2024-04-24T09:57:38+00:00
|
|
null | null |
{"license": "openrail"}
|
lazed-outt-for-real24/mjspeaking
| null |
[
"license:openrail",
"region:us"
] | null |
2024-04-24T09:58:12+00:00
|
|
null | null |
{}
|
Madnesss/blip2-lora-coco2017-th
| null |
[
"region:us"
] | null |
2024-04-24T09:59:08+00:00
|
|
text-generation
|
transformers
|
{}
|
titanbot/opt-6.7b-4bit-AWQ
| null |
[
"transformers",
"opt",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T09:59:16+00:00
|
|
null | null |
{"license": "apache-2.0"}
|
msubhanahmed/CTran
| null |
[
"license:apache-2.0",
"region:us"
] | null |
2024-04-24T09:59:25+00:00
|
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
nmdr/Gemma-1.1-2b-it-Physics-2k-The-Iron-Giant20
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T10:00:18+00:00
|
text-generation
|
transformers
|
{}
|
titanbot/opt-1.3b-4bit-AWQ
| null |
[
"transformers",
"opt",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T10:01:34+00:00
|
|
null |
keras
|
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 9.999999747378752e-06 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
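As a hedged usage sketch, a Keras checkpoint pushed with the `keras` library can typically be loaded straight from the Hub via `huggingface_hub`; the 48x48 grayscale input shape below is an assumption based on the usual FER2013 setup, so check `model.input_shape` before feeding real images.

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

# Load the Keras checkpoint directly from the Hub.
model = from_pretrained_keras("AndreiUrsu/Fer2013_VGG16_35epochs")
model.summary()

# Dummy forward pass; the input shape is an assumption (FER2013 images are
# 48x48 grayscale) -- adjust to model.input_shape for real data.
dummy = np.zeros((1, 48, 48, 1), dtype="float32")
print(model.predict(dummy))
```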
|
{"library_name": "keras"}
|
AndreiUrsu/Fer2013_VGG16_35epochs
| null |
[
"keras",
"region:us"
] | null |
2024-04-24T10:02:32+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2404v5
This model is a fine-tuned version of [projecte-aina/roberta-base-ca-v2-cased-te](https://huggingface.co/projecte-aina/roberta-base-ca-v2-cased-te) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6586
- Accuracy: 0.8403
- Precision: 0.8407
- Recall: 0.8403
- F1: 0.8403
- Ratio: 0.5168
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- lr_scheduler_warmup_steps: 4
- num_epochs: 4
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Ratio |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|
| 3.0551 | 0.1626 | 10 | 1.7145 | 0.5 | 0.5 | 0.5 | 0.3733 | 0.0504 |
| 1.0346 | 0.3252 | 20 | 0.8860 | 0.5336 | 0.5558 | 0.5336 | 0.4822 | 0.8151 |
| 0.8531 | 0.4878 | 30 | 0.8174 | 0.5672 | 0.6156 | 0.5672 | 0.5166 | 0.1765 |
| 0.8279 | 0.6504 | 40 | 0.7147 | 0.7563 | 0.7928 | 0.7563 | 0.7485 | 0.6765 |
| 0.744 | 0.8130 | 50 | 0.6403 | 0.8067 | 0.8099 | 0.8067 | 0.8062 | 0.5504 |
| 0.6594 | 0.9756 | 60 | 0.6299 | 0.7983 | 0.8004 | 0.7983 | 0.7980 | 0.5420 |
| 0.5973 | 1.1382 | 70 | 0.6320 | 0.8193 | 0.8204 | 0.8193 | 0.8192 | 0.5294 |
| 0.5934 | 1.3008 | 80 | 0.6306 | 0.8151 | 0.8184 | 0.8151 | 0.8147 | 0.4496 |
| 0.5529 | 1.4634 | 90 | 0.6442 | 0.8193 | 0.8195 | 0.8193 | 0.8193 | 0.5126 |
| 0.5618 | 1.6260 | 100 | 0.6186 | 0.8193 | 0.8204 | 0.8193 | 0.8192 | 0.5294 |
| 0.5632 | 1.7886 | 110 | 0.5845 | 0.8361 | 0.8373 | 0.8361 | 0.8360 | 0.5294 |
| 0.5886 | 1.9512 | 120 | 0.5752 | 0.8361 | 0.8367 | 0.8361 | 0.8361 | 0.5210 |
| 0.5596 | 2.1138 | 130 | 0.5760 | 0.8403 | 0.8407 | 0.8403 | 0.8403 | 0.5168 |
| 0.4964 | 2.2764 | 140 | 0.6181 | 0.8361 | 0.8367 | 0.8361 | 0.8361 | 0.4790 |
| 0.5014 | 2.4390 | 150 | 0.6422 | 0.8361 | 0.8381 | 0.8361 | 0.8359 | 0.5378 |
| 0.5251 | 2.6016 | 160 | 0.6033 | 0.8403 | 0.8428 | 0.8403 | 0.8401 | 0.5420 |
| 0.4723 | 2.7642 | 170 | 0.5839 | 0.8487 | 0.8503 | 0.8487 | 0.8486 | 0.5336 |
| 0.4864 | 2.9268 | 180 | 0.5837 | 0.8613 | 0.8616 | 0.8613 | 0.8613 | 0.5126 |
| 0.4512 | 3.0894 | 190 | 0.5973 | 0.8487 | 0.8491 | 0.8487 | 0.8487 | 0.5168 |
| 0.477 | 3.2520 | 200 | 0.6159 | 0.8403 | 0.8404 | 0.8403 | 0.8403 | 0.5084 |
| 0.4198 | 3.4146 | 210 | 0.6523 | 0.8403 | 0.8407 | 0.8403 | 0.8403 | 0.5168 |
| 0.4322 | 3.5772 | 220 | 0.6646 | 0.8403 | 0.8407 | 0.8403 | 0.8403 | 0.5168 |
| 0.4889 | 3.7398 | 230 | 0.6632 | 0.8403 | 0.8407 | 0.8403 | 0.8403 | 0.5168 |
| 0.4409 | 3.9024 | 240 | 0.6589 | 0.8403 | 0.8407 | 0.8403 | 0.8403 | 0.5168 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall", "f1"], "base_model": "projecte-aina/roberta-base-ca-v2-cased-te", "model-index": [{"name": "2404v5", "results": []}]}
|
adriansanz/2404v5
| null |
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:projecte-aina/roberta-base-ca-v2-cased-te",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T10:03:12+00:00
|
null | null |
{}
|
titanbot/llama-2-7b-4bit-ggml
| null |
[
"region:us"
] | null |
2024-04-24T10:04:38+00:00
|
|
null | null |
# MergerixT3qm7-7B
MergerixT3qm7-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: MiniMoog/Mergerix-7b-v0.3
- model: nlpguy/T3QM7
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/MergerixT3qm7-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]}
|
automerger/MergerixT3qm7-7B
| null |
[
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null |
2024-04-24T10:05:11+00:00
|
null | null |
{}
|
ealdaz/Testing
| null |
[
"region:us"
] | null |
2024-04-24T10:05:15+00:00
|
|
null | null |
{}
|
SuperkingbasSKB/Ailone_8B
| null |
[
"region:us"
] | null |
2024-04-24T10:05:40+00:00
|
|
null |
transformers
|
{}
|
argus-ai/fips-rubert-tiny2_keywords_10e
| null |
[
"transformers",
"pytorch",
"bert",
"pretraining",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T10:05:44+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-70m_mz-130_IMDB_n-its-10-seed-4
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-70m", "model-index": [{"name": "robust_llm_pythia-70m_mz-130_IMDB_n-its-10-seed-4", "results": []}]}
|
AlignmentResearch/robust_llm_pythia-70m_mz-130_IMDB_n-its-10-seed-4
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T10:06:39+00:00
|
text-generation
| null |
# MoMonir/Phi-3-mini-128k-instruct-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-128k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) for more details on the model.
<!-- README_GGUF.md-about-gguf start -->
### About GGUF ([TheBloke](https://huggingface.co/TheBloke) Description)
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
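As a minimal local-inference sketch with `llama-cpp-python` (one of the clients listed above); the GGUF filename and the Phi-3-style prompt formatting are assumptions, so check the repository's Files tab for the quantization you actually downloaded:

```python
from llama_cpp import Llama

# The filename is an assumption -- replace it with the GGUF file you downloaded.
llm = Llama(
    model_path="Phi-3-mini-128k-instruct.Q4_K_M.gguf",
    n_ctx=4096,  # context window to allocate; raise it if you need longer inputs
)

# Phi-3-style chat formatting (assumption); adjust to your client's template handling.
prompt = "<|user|>\nWhat is a large language model?<|end|>\n<|assistant|>\n"
output = llm(prompt, max_tokens=256)
print(output["choices"][0]["text"])
```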
|
{"language": ["en"], "license": "mit", "tags": ["nlp", "code"], "license_link": "https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation"}
|
MoMonir/Phi-3-mini-128k-instruct-GGUF
| null |
[
"gguf",
"nlp",
"code",
"text-generation",
"en",
"license:mit",
"region:us"
] | null |
2024-04-24T10:06:49+00:00
|
null | null |
400000step
|
{"license": "mit"}
|
DARK-STUDIO/so-vits-svc_4.1_base_model_3.0
| null |
[
"license:mit",
"region:us"
] | null |
2024-04-24T10:07:05+00:00
|
null |
transformers
|
# Naive Pointwise MonoBERT trained on Baidu-ULTR
A flax-based MonoBERT cross encoder trained on the [Baidu-ULTR](https://arxiv.org/abs/2207.03051) dataset with a **pointwise sigmoid cross-entropy loss on clicks**. The loss is called "naive" as we use user clicks as a signal of relevance without any additional position bias correction. For more info, [read our paper](https://arxiv.org/abs/2404.02543) and [find the code for this model here](https://github.com/philipphager/baidu-bert-model).
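As a minimal sketch of what this objective looks like (an illustration assuming per-document relevance logits and binary click labels, not the exact training code):

```python
import optax

def naive_pointwise_loss(logits, clicks):
    # Sigmoid cross-entropy between predicted relevance logits and raw clicks,
    # with no position-bias correction -- hence "naive".
    return optax.sigmoid_binary_cross_entropy(logits, clicks).mean()
```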
## Test Results on Baidu-ULTR
Ranking performance is measured in DCG, nDCG, and MRR on expert annotations (6,985 queries). Click prediction performance is measured in log-likelihood on one test partition of user clicks (≈297k queries).
| Model | Log-likelihood | DCG@1 | DCG@3 | DCG@5 | DCG@10 | nDCG@10 | MRR@10 |
|------------------------------------------------------------------------------------------------|----------------|-------|-------|-------|--------|---------|--------|
| [Pointwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-pointwise) | 0.227 | 1.641 | 3.462 | 4.752 | 7.251 | 0.357 | 0.609 |
| [Pointwise Two-Tower](https://huggingface.co/philipphager/baidu-ultr_uva-bert_twotower) | 0.218 | 1.629 | 3.471 | 4.822 | 7.456 | 0.367 | 0.607 |
| [Pointwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-pointwise) | 0.222 | 1.295 | 2.811 | 3.977 | 6.296 | 0.307 | 0.534 |
| [Listwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-listwise) | - | 1.947 | 4.108 | 5.614 | 8.478 | 0.405 | 0.639 |
| [Listwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-listwise) | - | 1.671 | 3.530 | 4.873 | 7.450 | 0.361 | 0.603 |
| [Listwise DLA](https://huggingface.co/philipphager/baidu-ultr_uva-bert_dla) | - | 1.796 | 3.730 | 5.125 | 7.802 | 0.377 | 0.615 |
## Usage
Here is an example of downloading the model and calling it for inference on a mock batch of input data. For more details on how to use the model on the Baidu-ULTR dataset, take a look at our [training](https://github.com/philipphager/baidu-bert-model/blob/main/main.py) and [evaluation scripts](https://github.com/philipphager/baidu-bert-model/blob/main/eval.py) in our code repository.
```Python
import jax.numpy as jnp
from src.model import CrossEncoder
model = CrossEncoder.from_pretrained(
"philipphager/baidu-ultr_uva-bert_naive-pointwise",
)
# Mock batch following Baidu-ULTR with 4 documents, each with 8 tokens
batch = {
# Query_id for each document
"query_id": jnp.array([1, 1, 1, 1]),
# Document position in SERP
"positions": jnp.array([1, 2, 3, 4]),
# Token ids for: [CLS] Query [SEP] Document
"tokens": jnp.array([
[2, 21448, 21874, 21436, 1, 20206, 4012, 2860],
[2, 21448, 21874, 21436, 1, 16794, 4522, 2082],
[2, 21448, 21874, 21436, 1, 20206, 10082, 9773],
[2, 21448, 21874, 21436, 1, 2618, 8520, 2860],
]),
# Specify if a token id belongs to the query (0) or document (1)
"token_types": jnp.array([
[0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1, 1, 1],
]),
# Marks if a token should be attended to (True) or ignored, e.g., padding tokens (False):
"attention_mask": jnp.array([
[True, True, True, True, True, True, True, True],
[True, True, True, True, True, True, True, True],
[True, True, True, True, True, True, True, True],
[True, True, True, True, True, True, True, True],
]),
}
outputs = model(batch, train=False)
print(outputs)
```
## Reference
```
@inproceedings{Hager2024BaiduULTR,
author = {Philipp Hager and Romain Deffayet and Jean-Michel Renders and Onno Zoeter and Maarten de Rijke},
title = {Unbiased Learning to Rank Meets Reality: Lessons from Baidu’s Large-Scale Search Dataset},
booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR`24)},
organization = {ACM},
year = {2024},
}
```
|
{"license": "mit", "datasets": ["philipphager/baidu-ultr-pretrain", "philipphager/baidu-ultr_uva-mlm-ctr"], "metrics": ["log-likelihood", "dcg@1", "dcg@3", "dcg@5", "dcg@10", "ndcg@10", "mrr@10"], "co2_eq_emissions": {"emissions": 2090, "source": "Calculated using the [ML CO2 impact calculator](https://mlco2.github.io/impact/#compute), training for 4 x 45 hours with a carbon efficiency of 0.029 kg/kWh. You can inspect the carbon efficiency of the French national grid provider here: https://www.rte-france.com/eco2mix/les-emissions-de-co2-par-kwh-produit-en-france", "training_type": "Pre-training", "geographical_location": "Grenoble, France", "hardware_used": "4 NVIDIA H100-80GB GPUs"}}
|
philipphager/baidu-ultr_uva-bert_naive-pointwise
| null |
[
"transformers",
"safetensors",
"bert",
"dataset:philipphager/baidu-ultr-pretrain",
"dataset:philipphager/baidu-ultr_uva-mlm-ctr",
"arxiv:2207.03051",
"arxiv:2404.02543",
"license:mit",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T10:07:06+00:00
|
null |
transformers
|
# Uploaded model
- **Developed by:** saiful2024
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
|
saiful2024/llama3-8b-oig-unsloth
| null |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T10:07:32+00:00
|
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
devkya/openai-whisper-large-v2-ko-transcribe-self
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T10:07:33+00:00
|
text-to-image
|
diffusers
|
# Fonglets Lunaflame Pony XL
<Gallery />
## Trigger words
You should use `lunaflame` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Fongletto/Fonglets_Lunaflame_Pony_XL/tree/main) them in the Files & versions tab.
|
{"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora", "not-for-all-audiences"], "widget": [{"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up,", "parameters": {"negative_prompt": "source_pony, censored, bar censor, mosaic, line censor, muscles, abs"}, "output": {"url": "images/00088-2553698826.png"}}, {"text": "-", "output": {"url": "images/00089-2010147136.png"}}, {"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up,", "parameters": {"negative_prompt": "source_pony, censored, bar censor, mosaic, line censor, muscles, abs"}, "output": {"url": "images/00090-2949798506.png"}}], "base_model": "stablediffusionapi/pony-diffusion-v6-xl", "instance_prompt": "lunaflame"}
|
Fongletto/Fonglets_Lunaflame_Pony_XL
| null |
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"not-for-all-audiences",
"base_model:stablediffusionapi/pony-diffusion-v6-xl",
"region:us"
] | null |
2024-04-24T10:08:18+00:00
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
OwOOwO/stable-pants2
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T10:08:40+00:00
|
text-to-image
|
diffusers
|
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-Wouter01/diffusion_model_out
These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning.
You can find some example images below.
prompt: The same image but fixing small physical and illumination inconsistencies

prompt: The same image but fixing small physical and illumination inconsistencies

## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
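Until the snippet above is filled in by the authors, here is a minimal sketch of how ControlNet weights like these are commonly loaded with diffusers. The conditioning image file name, its preprocessing, and the reuse of the training prompt are assumptions, not documented choices.
```python
# Minimal sketch, assuming standard diffusers ControlNet usage; not the authors' verified pipeline.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "Wouter01/diffusion_model_out", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The conditioning image must match the (undocumented) format used during training.
condition = load_image("conditioning.png")  # placeholder file name
image = pipe(
    "The same image but fixing small physical and illumination inconsistencies",
    image=condition,
    num_inference_steps=30,
).images[0]
image.save("output.png")
```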
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "controlnet", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "controlnet", "diffusers-training"], "base_model": "stabilityai/stable-diffusion-2-1-base", "inference": true}
|
Wouter01/diffusion_model_out
| null |
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | null |
2024-04-24T10:09:06+00:00
|
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "NousResearch/Llama-2-7b-chat-hf", "model-index": [{"name": "results", "results": []}]}
|
Soykot/results
| null |
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null |
2024-04-24T10:10:08+00:00
|
null |
transformers
|
# Naive Listwise MonoBERT trained on Baidu-ULTR
A flax-based MonoBERT cross encoder trained on the [Baidu-ULTR](https://arxiv.org/abs/2207.03051) dataset with a **listwise softmax cross-entropy loss on clicks**. The loss is called "naive" as we use user clicks as a signal of relevance without any additional position bias correction. For more info, [read our paper](https://arxiv.org/abs/2404.02543) and [find the code for this model here](https://github.com/philipphager/baidu-bert-model).
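As a rough illustration (not the repository's exact implementation), the "naive listwise" objective boils down to a softmax cross-entropy over each query's candidate documents, with the raw clicks used directly as labels:
```Python
import jax.numpy as jnp
from jax.nn import log_softmax

def naive_listwise_loss(scores, clicks):
    """Softmax cross-entropy over one query's documents, with raw clicks as labels.

    scores: (n_docs,) relevance scores predicted by the cross encoder
    clicks: (n_docs,) observed clicks (0/1); used without any position-bias
            correction, which is why the loss is called "naive".
    """
    return -jnp.sum(clicks * log_softmax(scores))

# Mock query with 3 documents, the first of which was clicked:
print(naive_listwise_loss(jnp.array([2.0, 0.5, -1.0]), jnp.array([1.0, 0.0, 0.0])))
```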
## Test Results on Baidu-ULTR
Ranking performance is measured in DCG, nDCG, and MRR on expert annotations (6,985 queries). Click prediction performance is measured in log-likelihood on one test partition of user clicks (≈297k queries).
| Model | Log-likelihood | DCG@1 | DCG@3 | DCG@5 | DCG@10 | nDCG@10 | MRR@10 |
|------------------------------------------------------------------------------------------------|----------------|-------|-------|-------|--------|---------|--------|
| [Pointwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-pointwise) | 0.227 | 1.641 | 3.462 | 4.752 | 7.251 | 0.357 | 0.609 |
| [Pointwise Two-Tower](https://huggingface.co/philipphager/baidu-ultr_uva-bert_twotower) | 0.218 | 1.629 | 3.471 | 4.822 | 7.456 | 0.367 | 0.607 |
| [Pointwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-pointwise) | 0.222 | 1.295 | 2.811 | 3.977 | 6.296 | 0.307 | 0.534 |
| [Listwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-listwise) | - | 1.947 | 4.108 | 5.614 | 8.478 | 0.405 | 0.639 |
| [Listwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-listwise) | - | 1.671 | 3.530 | 4.873 | 7.450 | 0.361 | 0.603 |
| [Listwise DLA](https://huggingface.co/philipphager/baidu-ultr_uva-bert_dla) | - | 1.796 | 3.730 | 5.125 | 7.802 | 0.377 | 0.615 |
## Usage
Here is an example of downloading the model and calling it for inference on a mock batch of input data. For more details on how to use the model on the Baidu-ULTR dataset, take a look at our [training](https://github.com/philipphager/baidu-bert-model/blob/main/main.py) and [evaluation scripts](https://github.com/philipphager/baidu-bert-model/blob/main/eval.py) in our code repository.
```Python
import jax.numpy as jnp
from src.model import ListwiseCrossEncoder
model = ListwiseCrossEncoder.from_pretrained(
"philipphager/baidu-ultr_uva-bert_naive-listwise",
)
# Mock batch following Baidu-ULTR with 4 documents, each with 8 tokens
batch = {
# Query_id for each document
"query_id": jnp.array([1, 1, 1, 1]),
# Document position in SERP
"positions": jnp.array([1, 2, 3, 4]),
# Token ids for: [CLS] Query [SEP] Document
"tokens": jnp.array([
[2, 21448, 21874, 21436, 1, 20206, 4012, 2860],
[2, 21448, 21874, 21436, 1, 16794, 4522, 2082],
[2, 21448, 21874, 21436, 1, 20206, 10082, 9773],
[2, 21448, 21874, 21436, 1, 2618, 8520, 2860],
]),
# Specify if a token id belongs to the query (0) or document (1)
"token_types": jnp.array([
[0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1, 1, 1],
]),
# Marks if a token should be attended to (True) or ignored, e.g., padding tokens (False):
"attention_mask": jnp.array([
[True, True, True, True, True, True, True, True],
[True, True, True, True, True, True, True, True],
[True, True, True, True, True, True, True, True],
[True, True, True, True, True, True, True, True],
]),
}
outputs = model(batch, train=False)
print(outputs)
```
## Reference
```
@inproceedings{Hager2024BaiduULTR,
author = {Philipp Hager and Romain Deffayet and Jean-Michel Renders and Onno Zoeter and Maarten de Rijke},
title = {Unbiased Learning to Rank Meets Reality: Lessons from Baidu’s Large-Scale Search Dataset},
booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR`24)},
organization = {ACM},
year = {2024},
}
```
|
{"license": "mit", "datasets": ["philipphager/baidu-ultr-pretrain", "philipphager/baidu-ultr_uva-mlm-ctr"], "metrics": ["log-likelihood", "dcg@1", "dcg@3", "dcg@5", "dcg@10", "ndcg@10", "mrr@10"], "co2_eq_emissions": {"emissions": 2090, "source": "Calculated using the [ML CO2 impact calculator](https://mlco2.github.io/impact/#compute), training for 4 x 45 hours with a carbon efficiency of 0.029 kg/kWh. You can inspect the carbon efficiency of the French national grid provider here: https://www.rte-france.com/eco2mix/les-emissions-de-co2-par-kwh-produit-en-france", "training_type": "Pre-training", "geographical_location": "Grenoble, France", "hardware_used": "4 NVIDIA H100-80GB GPUs"}}
|
philipphager/baidu-ultr_uva-bert_naive-listwise
| null |
[
"transformers",
"safetensors",
"bert",
"dataset:philipphager/baidu-ultr-pretrain",
"dataset:philipphager/baidu-ultr_uva-mlm-ctr",
"arxiv:2207.03051",
"arxiv:2404.02543",
"license:mit",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T10:10:10+00:00
|
text2text-generation
|
transformers
|
{"language": ["en"], "tags": ["code"], "metrics": ["rouge"]}
|
shubh-2896/Summary
| null |
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"code",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T10:10:10+00:00
|
|
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Kn
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
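The card does not include a usage snippet; a minimal way to try the checkpoint with the transformers pipeline API (the audio file name below is just a placeholder) might look like this:
```python
# Minimal usage sketch, not taken from the original card.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="amithm3/whisper-small-hi")
result = asr("sample.wav")  # path to any local audio file (placeholder name)
print(result["text"])
```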
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"language": ["hi"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small Kn", "results": []}]}
|
amithm3/whisper-small-hi
| null |
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"base_model:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T10:11:33+00:00
|
null | null |
{}
|
Tavernari/git-commit-message-lora-model
| null |
[
"region:us"
] | null |
2024-04-24T10:11:44+00:00
|
|
text-generation
|
transformers
|
{}
|
chinmayc3/codellama-sql-7b-quantized
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-24T10:12:10+00:00
|
|
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_ablation_3iters_bs256_nodpo_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.001_ablation_3iters_bs256_nodpo_iter_1", "results": []}]}
|
ShenaoZ/0.001_ablation_3iters_bs256_nodpo_iter_1
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T10:12:10+00:00
|
text-generation
|
transformers
|
# GoPhos Quantized Model
## Overview
This repository hosts the quantized version of the GoPhos model, specifically optimized for interpreting Sophos logs exported from Splunk. The model is available for easy integration and usage through the `mlx-lm` library, facilitating seamless log interpretation tasks.
## Model Description
The GoPhos model has been quantized to improve its efficiency and reduce its memory footprint while retaining its interpretive capabilities for Sophos logs. Through quantization, the model achieves faster inference times and reduced resource consumption, making it well suited to deployment in resource-constrained environments.
## Usage
To utilize the quantized GoPhos model, follow these simple steps:
1. Install the `mlx-lm` library:
```bash
pip install mlx-lm
```
2. Load the model and tokenizer:
```python
from mlx_lm import load, generate
model, tokenizer = load("SadokBarbouche/gophos-quantized")
```
3. Generate log interpretations:
```python
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
## Evaluation
The quantized GoPhos model has been evaluated for interpretive accuracy and efficiency, demonstrating performance comparable to the original model while achieving faster inference times and reduced memory usage.
## Acknowledgements
We extend our gratitude to the creators of the original GoPhos model for their pioneering work in log interpretation. Additionally, we thank the developers of the `mlx-lm` library for providing a convenient interface for model loading and generation.
|
{}
|
SadokBarbouche/gophos-quantized
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T10:12:25+00:00
|
null | null |
{}
|
Bou87/Niniai
| null |
[
"region:us"
] | null |
2024-04-24T10:13:05+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-31m_mz-130_PasswordMatch_n-its-10-seed-2
This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-31m", "model-index": [{"name": "robust_llm_pythia-31m_mz-130_PasswordMatch_n-its-10-seed-2", "results": []}]}
|
AlignmentResearch/robust_llm_pythia-31m_mz-130_PasswordMatch_n-its-10-seed-2
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-31m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T10:13:12+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-31m_mz-130_PasswordMatch_n-its-10-seed-1
This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-31m", "model-index": [{"name": "robust_llm_pythia-31m_mz-130_PasswordMatch_n-its-10-seed-1", "results": []}]}
|
AlignmentResearch/robust_llm_pythia-31m_mz-130_PasswordMatch_n-its-10-seed-1
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-31m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T10:14:22+00:00
|
text-generation
|
transformers
|
# LAION LeoLM: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel
Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2.
Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality-specific text.
Thanks to a compute grant at HessianAI's new supercomputer **42**, we release two foundation models trained with 8k context length,
[`LeoLM/leo-hessianai-7b`](https://huggingface.co/LeoLM/leo-hessianai-7b) and [`LeoLM/leo-hessianai-13b`](https://huggingface.co/LeoLM/leo-hessianai-13b) under the [Llama-2 community license](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) (70b also coming soon! 👀).
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our [blog post]() or our paper (preprint coming soon) for more details!
*A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.*
## LeoLM Chat
`LeoLM/leo-hessianai-7b-chat` is a German chat model built on our foundation model `LeoLM/leo-hessianai-7b` and finetuned on a selection of German instruction datasets.
The model performs exceptionally well on writing, explanation and discussion tasks but struggles somewhat with math and advanced reasoning. See our MT-Bench-DE scores:
```
{
"first_turn": 5.75,
"second_turn": 4.45,
"categories": {
"writing": 5.875,
"roleplay": 6.3,
"reasoning": 3.5,
"math": 2.85,
"coding": 2.95,
"extraction": 4.3,
"stem": 7.4,
"humanities": 7.625
},
"average": 5.1
}
```
## Model Details
- **Finetuned from:** [LeoLM/leo-hessianai-7b](https://huggingface.co/LeoLM/leo-hessianai-7b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English and German
- **Demo:** [Web Demo]()
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Björn Plüster](mailto:[email protected])
## Use in 🤗Transformers
First install direct dependencies:
```
pip install transformers torch sentencepiece
```
If you want faster inference using flash-attention2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn==v2.1.1 --no-build-isolation
pip install git+https://github.com/HazyResearch/[email protected]#subdirectory=csrc/rotary
```
Then load the model in transformers:
```python
from transformers import pipeline
import torch
system_prompt = """<|im_start|>system
Dies ist eine Unterhaltung zwischen einem intelligenten, hilfsbereitem KI-Assistenten und einem Nutzer.
Der Assistent gibt ausführliche, hilfreiche und ehrliche Antworten.<|im_end|>
"""
prompt_format = "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
prompt = "Erkläre mir wie die Fahrradwegesituation in Hamburg ist."
generator = pipeline(model="LeoLM/leo-hessianai-7b-chat", device="cuda", torch_dtype=torch.float16, trust_remote_code=True) # True for flash-attn2 else False
print(generator(prompt_format.format(prompt=prompt), do_sample=True, top_p=0.95, max_length=8192))
```
"*Als KI kann ich keine persönlichen Beobachtungen teilen, aber ich kann einige allgemeine Informationen zur Fahrradwegesituation in Hamburg liefern. Im Vergleich zu vielen anderen großen Städten hat Hamburg eine hohe Anzahl von Fahrradfahrern und nimmt seine Verantwortung für nachhaltige Verkehrsmittel sehr ernst. Es gibt viele Fahrradwege und separate Fahrspuren, die Radfahrern ermöglichen, zusammen mit dem Autoverkehr zu fahren. Diese Fahrradspuren sind oft mit Markierungen gekennzeichnet und durch physische Trennungen von anderen Fahrspuren abgegrenzt. Darüber hinaus gibt es viele Fahrradstraßen, auf denen Radfahrer Vorfahrt haben und Autos langsamer fahren müssen.*
*In einigen städtischen Gebieten können Fahrradwege jedoch eng oder überfüllt sein, besonders während der Stoßzeiten. Es gibt auch viele Kreuzungen, an denen Radfahrer anhalten und auf Grün warten müssen, ähnlich wie Autofahrer. Insgesamt ist die Fahrradinfrastruktur in Hamburg ziemlich gut, aber wie überall gibt es immer Raum für Verbesserungen.*"
## Prompting / Prompt Template
Prompt dialogue template (ChatML format):
```
"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
```
The model input can contain multiple conversation turns between user and assistant, e.g.
```
<|im_start|>user
{prompt 1}<|im_end|>
<|im_start|>assistant
{reply 1}<|im_end|>
<|im_start|>user
{prompt 2}<|im_end|>
<|im_start|>assistant
(...)
```
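If you prefer to build this prompt string in code rather than by hand, a small helper along these lines works; it is not part of the official LeoLM code, just a convenience sketch:
```python
# Convenience sketch for assembling ChatML-style prompts; not part of the official LeoLM repo.
def build_chatml_prompt(system_message, turns):
    """turns: list of (user_prompt, assistant_reply_or_None) tuples; leave the
    last assistant reply as None so the model continues from there."""
    prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n"
    for user, assistant in turns:
        prompt += f"<|im_start|>user\n{user}<|im_end|>\n<|im_start|>assistant\n"
        if assistant is not None:
            prompt += f"{assistant}<|im_end|>\n"
    return prompt

print(build_chatml_prompt(
    "Dies ist eine Unterhaltung zwischen einem intelligenten, hilfsbereitem KI-Assistenten und einem Nutzer.",
    [("Erkläre mir wie die Fahrradwegesituation in Hamburg ist.", None)],
))
```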
## Ethical Considerations and Limitations
LeoLM has been tested in English and German, but this testing has not covered, nor could it cover, all scenarios.
For these reasons, as with all LLMs, the potential outputs of `LeoLM/leo-hessianai-7b-chat` cannot be predicted
in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses
to user prompts. Therefore, before deploying any applications of `LeoLM/leo-hessianai-7b-chat`, developers should
perform safety testing and tuning tailored to their specific applications of the model.
Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).
## Finetuning Details
| Hyperparameter | Value |
|---|---|
| Num epochs | 3 |
| Examples per epoch | 131214 |
| Global batch size | 256 |
| Learning rate | 3e-5 |
| Warmup steps | 100 |
| LR scheduler | Cosine |
| Adam betas | (0.9, 0.95) |
## Dataset Details
```
## Stats for 'Subset of OpenAssistant/OASST-DE' (3534 samples (100.0%))
-----------------
Accepted: 3534/3534 (100.0%)
Accepted tokens: 2259302
Skipped: 0 (0.0%)
Min tokens per sample: 29
Max tokens per sample: 2484
Avg tokens per sample: 639.3044708545557
-----------------
## Stats for 'Subset of FreedomIntelligence/evol-instruct-deutsch' (57841 samples (100.0%))
-----------------
Accepted: 57841/57841 (100.0%)
Accepted tokens: 42958192
Skipped: 0 (0.0%)
Min tokens per sample: 33
Max tokens per sample: 5507
Avg tokens per sample: 742.6944900675991
-----------------
## Stats for 'Subset of FreedomIntelligence/alpaca-gpt4-deutsch' (48969 samples (100.0%))
-----------------
Accepted: 48969/48969 (100.0%)
Accepted tokens: 13372005
Skipped: 0 (0.0%)
Min tokens per sample: 19
Max tokens per sample: 1359
Avg tokens per sample: 273.07082031489307
-----------------
## Stats for 'Subset of LeoLM/OpenSchnabeltier' (21314 samples (100.0%))
-----------------
Accepted: 21314/21314 (100.0%)
Accepted tokens: 8134690
Skipped: 0 (0.0%)
Min tokens per sample: 25
Max tokens per sample: 1202
Avg tokens per sample: 381.65947264708643
-----------------
## Stats for 'Subset of LeoLM/German_Poems' (490 samples (100.0%))
-----------------
Accepted: 490/490 (100.0%)
Accepted tokens: 618642
Skipped: 0 (0.0%)
Min tokens per sample: 747
Max tokens per sample: 1678
Avg tokens per sample: 1262.534693877551
-----------------
## Stats for 'Subset of LeoLM/German_Songs' (392 samples (100.0%))
-----------------
Accepted: 392/392 (100.0%)
Accepted tokens: 187897
Skipped: 0 (0.0%)
Min tokens per sample: 231
Max tokens per sample: 826
Avg tokens per sample: 479.3290816326531
-----------------
## Stats for 'total' (132540 samples (100.0%))
-----------------
Accepted: 132540/132540 (100.0%)
Accepted tokens: 67530728
Skipped: 0 (0.0%)
Min tokens per sample: 19
Max tokens per sample: 5507
Avg tokens per sample: 509.51205673758864
-----------------
```
|
{"language": ["en", "de"], "library_name": "transformers", "datasets": ["LeoLM/OpenSchnabeltier", "OpenAssistant/OASST-DE", "FreedomIntelligence/alpaca-gpt4-deutsch", "FreedomIntelligence/evol-instruct-deutsch", "LeoLM/German_Poems", "LeoLM/German_Songs"], "pipeline_tag": "text-generation"}
|
titanbot/LeoLM-hessianai-7b-chat
| null |
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"de",
"dataset:LeoLM/OpenSchnabeltier",
"dataset:OpenAssistant/OASST-DE",
"dataset:FreedomIntelligence/alpaca-gpt4-deutsch",
"dataset:FreedomIntelligence/evol-instruct-deutsch",
"dataset:LeoLM/German_Poems",
"dataset:LeoLM/German_Songs",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T10:14:26+00:00
|
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Marco127/llamantino_hodi_relabeled
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T10:14:54+00:00
|
null | null |
{}
|
Seclusion/llama-8b-qlora-ultrachat
| null |
[
"region:us"
] | null |
2024-04-24T10:15:03+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2404v6
This model is a fine-tuned version of [projecte-aina/roberta-base-ca-v2-cased-te](https://huggingface.co/projecte-aina/roberta-base-ca-v2-cased-te) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5939
- Accuracy: 0.8445
- Precision: 0.8451
- Recall: 0.8445
- F1: 0.8445
- Ratio: 0.4790
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- lr_scheduler_warmup_steps: 4
- num_epochs: 2
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Ratio |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|
| 0.5731 | 0.2597 | 10 | 0.5646 | 0.8613 | 0.8645 | 0.8613 | 0.8610 | 0.4538 |
| 0.5515 | 0.5195 | 20 | 0.5569 | 0.8613 | 0.8626 | 0.8613 | 0.8612 | 0.4706 |
| 0.5164 | 0.7792 | 30 | 0.6079 | 0.8277 | 0.8363 | 0.8277 | 0.8266 | 0.5798 |
| 0.5641 | 1.0390 | 40 | 0.5728 | 0.8571 | 0.8608 | 0.8571 | 0.8568 | 0.4496 |
| 0.4665 | 1.2987 | 50 | 0.5992 | 0.8403 | 0.8407 | 0.8403 | 0.8403 | 0.5168 |
| 0.4632 | 1.5584 | 60 | 0.5990 | 0.8613 | 0.8634 | 0.8613 | 0.8611 | 0.4622 |
| 0.4456 | 1.8182 | 70 | 0.5939 | 0.8445 | 0.8451 | 0.8445 | 0.8445 | 0.4790 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall", "f1"], "base_model": "projecte-aina/roberta-base-ca-v2-cased-te", "model-index": [{"name": "2404v6", "results": []}]}
|
adriansanz/2404v6
| null |
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:projecte-aina/roberta-base-ca-v2-cased-te",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T10:16:04+00:00
|
null | null |
# Bi-LSTM for Evidence Detection
This repository contains the implementation of a bi-directional Long Short-Term Memory (Bi-LSTM) network designed for the task of evidence detection in text data. The model processes and classifies textual inputs to identify information that can be considered evidence within a given context.
## Project Structure
```
Bi-LSTM/
│
├── ED_BiLSTM.ipynb - Jupyter notebook with the model's code.
├── my_model_card.md - Model card detailing the model's purpose and characteristics.
├── requirements.txt - List of python libraries required to run the code.
├── data/ - Directory containing sample data (if applicable).
├── models/ - Directory containing saved model weights and architecture (if applicable).
├── scripts/ - Supporting scripts for data preprocessing or model inference.
└── README.md - This file.
```
## Setup
To set up the necessary environment:
1. Clone the repository:
```bash
git clone https://huggingface.co/azibbbbb/Bi-LSTM
```
2. Install the required packages:
```bash
pip install -r requirements.txt
```
## Usage
To run the model, open the ED_BiLSTM.ipynb notebook in a Jupyter environment and execute the cells sequentially.
## Model Details
- **Model Type:** Classification
- **Architecture:** Bi-LSTM
- **Training Data:** Over 23,000 training instances.
- **Evaluation Data:** Over 5,000 test instances.
- **Metrics:** Accuracy, Precision, Recall, F1-Score, ROC AUC.
For more detailed information, refer to the model card.
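The notebook holds the authoritative architecture and hyperparameters; purely as an illustration (the framework, layer sizes, and vocabulary size below are assumptions), a Bi-LSTM evidence-detection classifier of this kind typically has the following shape:
```python
# Illustrative only: the repository's actual model lives in ED_BiLSTM.ipynb.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)                   # (batch, seq, embed)
        _, (hidden, _) = self.lstm(embedded)                   # hidden: (2, batch, hidden)
        combined = torch.cat([hidden[0], hidden[1]], dim=-1)   # forward + backward final states
        return self.fc(combined)                               # (batch, num_classes)

model = BiLSTMClassifier(vocab_size=20000)                     # vocab size is a placeholder
logits = model(torch.randint(1, 20000, (4, 32)))               # 4 mock sequences of 32 token ids
print(logits.shape)
```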
## Contributing
Contributions to this project are welcome. Please follow the existing coding style and add unit tests for any new or changed functionality. Be sure to update the README with any significant changes.
## License
This project is licensed under CC BY 4.0; see the LICENSE file for details.
## Contact
For any questions or concerns, please open an issue in the repository, or directly reach out to the maintainers via email.
## Acknowledgments
If your project has external contributors or if you've used resources created by others, acknowledge them here. Include any other acknowledgments you deem necessary.
|
{}
|
azibbbbb/Bi-LSTM
| null |
[
"region:us"
] | null |
2024-04-24T10:16:19+00:00
|
text2text-generation
|
transformers
|
{}
|
himanshubeniwal/mbart-kk-en-c_MOC_2
| null |
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T10:16:32+00:00
|
|
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.01_ablation_3iters_bs256_nodpo_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.01_ablation_3iters_bs256_nodpo_iter_1", "results": []}]}
|
ShenaoZ/0.01_ablation_3iters_bs256_nodpo_iter_1
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T10:17:05+00:00
|
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
la-min/mGPT_Myanmar
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T10:17:07+00:00
|
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1028
- Accuracy: 0.9699
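For a quick sanity check of the checkpoint, a minimal usage sketch (not from the original card; the image path below is a placeholder) is:
```python
# Minimal usage sketch, not taken from the original card.
from transformers import pipeline

classifier = pipeline("image-classification", model="ealdaz/vit-base-beans")
predictions = classifier("bean_leaf.jpg")  # path or URL to a bean leaf photo (placeholder)
print(predictions)
```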
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.056 | 1.0 | 517 | 0.1028 | 0.9699 |
| 0.0627 | 2.0 | 1034 | 0.1100 | 0.9699 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "apache-2.0", "tags": ["image-classification", "vision", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "pipeline_tag": "image-classification", "model-index": [{"name": "vit-base-beans", "results": []}]}
|
ealdaz/vit-base-beans
| null |
[
"transformers",
"safetensors",
"vit",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T10:17:20+00:00
|