pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths, 0-18.3M) | metadata (stringlengths, 2-1.07B) | id (stringlengths, 5-122) | last_modified (null) | tags (listlengths, 1-1.84k) | sha (null) | created_at (stringlengths, 25-25)
---|---|---|---|---|---|---|---|---
null | null | {"license": "openrail"} | otmanabs/blooomai | null | [
"safetensors",
"license:openrail",
"region:us"
]
| null | 2024-04-28T17:15:11+00:00 |
|
token-classification | transformers | {} | AliSaadatV/esm2_t12_35M_UR50D-finetuned-COILED_earlystop_70_15_15 | null | [
"transformers",
"tensorboard",
"safetensors",
"esm",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T17:15:45+00:00 |
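The row above points at an ESM-2 finetune for per-residue token classification. A minimal usage sketch with the 🤗 transformers pipeline, assuming the checkpoint carries a standard token-classification head (the protein sequence is a made-up example):

```python
from transformers import pipeline

# Hypothetical sketch: per-residue classification with the ESM-2 finetune above.
classifier = pipeline(
    "token-classification",
    model="AliSaadatV/esm2_t12_35M_UR50D-finetuned-COILED_earlystop_70_15_15",
)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy protein sequence
for prediction in classifier(sequence):
    print(prediction["index"], prediction["entity"], round(prediction["score"], 3))
```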
|
null | transformers |
# Uploaded model
- **Developed by:** armanbabayan
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Llama-2-7b-chat-hf
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library (see the loading sketch after this row). | {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "meta-llama/Llama-2-7b-chat-hf"} | armanbabayan/Llama2_Immigration_Low_Chat | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T17:15:48+00:00 |
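A hedged loading sketch for the Unsloth finetune in the row above; `max_seq_length` and `load_in_4bit` are illustrative choices, not values documented in the card:

```python
from unsloth import FastLanguageModel

# Sketch only: load the finetune named in the row above with Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="armanbabayan/Llama2_Immigration_Low_Chat",
    max_seq_length=2048,   # assumption, not from the card
    load_in_4bit=True,     # assumption, not from the card
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("What documents does a work visa require?", return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```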
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | sid-th26/gemma-mcq-question-all-data | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
]
| null | 2024-04-28T17:17:42+00:00 |
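The gemma card above leaves its "How to Get Started" section unfilled, so here is a generic sketch rather than the author's code. The repo carries a `4-bit` tag, so the weights may already be quantized; `device_map` and `torch_dtype` are illustrative, and the MCQ prompt format is a guess from the repo name:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic getting-started sketch for the repo in the row above (not the author's code).
repo = "sid-th26/gemma-mcq-question-all-data"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

prompt = "Question: What is the capital of France?\nA) Paris  B) Rome  C) Madrid\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```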
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | SlimCognito/wonkamodel | null | [
"transformers",
"safetensors",
"gguf",
"llama",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T17:18:42+00:00 |
text-generation | transformers |
# Arabic ORPO LLAMA 3
<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/6116d0584ef9fdfbf45dc4d9/3ns3O_bWYxKEXmozA073h.png">
</center>
## Story first
This model is a finetuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using [ORPO](https://github.com/xfactlab/orpo) on [2A2I/argilla-dpo-mix-7k-arabic](https://huggingface.co/datasets/2A2I/argilla-dpo-mix-7k-arabic).
I wanted to try ORPO and see whether it would better align an English-biased model like **llama3** to the Arabic language, or whether it would fail.
While the evaluations favour the base llama3 over my finetune, in practice I found my finetune was much better at producing coherent (mostly correct) Arabic text, which I find interesting.
I would encourage everyone to try out the model from [here](https://huggingface.co/spaces/MohamedRashad/Arabic-Chatbot-Arena) and share their insights with me ^^
## 🤗 Evaluation and Results
These results were produced using [lighteval](https://github.com/huggingface/lighteval) with the __community|arabic_mmlu__ tasks; a minimal inference sketch follows this row.
| Community | Llama-3-8B-Instruct | Arabic-ORPO-Llama-3-8B-Instruct |
|----------------------------------|---------------------|----------------------------------|
| **All** | **0.348** | **0.317** |
| Abstract Algebra | 0.310 | 0.230 |
| Anatomy | 0.385 | 0.348 |
| Astronomy | 0.388 | 0.316 |
| Business Ethics | 0.480 | 0.370 |
| Clinical Knowledge | 0.396 | 0.385 |
| College Biology | 0.347 | 0.299 |
| College Chemistry | 0.180 | 0.250 |
| College Computer Science | 0.250 | 0.190 |
| College Mathematics | 0.260 | 0.280 |
| College Medicine | 0.231 | 0.249 |
| College Physics | 0.225 | 0.216 |
| Computer Security | 0.470 | 0.440 |
| Conceptual Physics | 0.315 | 0.404 |
| Econometrics | 0.263 | 0.272 |
| Electrical Engineering | 0.414 | 0.359 |
| Elementary Mathematics | 0.320 | 0.272 |
| Formal Logic | 0.270 | 0.214 |
| Global Facts | 0.320 | 0.320 |
| High School Biology | 0.332 | 0.335 |
| High School Chemistry | 0.256 | 0.296 |
| High School Computer Science | 0.350 | 0.300 |
| High School European History | 0.224 | 0.242 |
| High School Geography | 0.323 | 0.364 |
| High School Government & Politics| 0.352 | 0.285 |
| High School Macroeconomics | 0.290 | 0.285 |
| High School Mathematics | 0.237 | 0.278 |
| High School Microeconomics | 0.231 | 0.273 |
| High School Physics | 0.252 | 0.225 |
| High School Psychology | 0.316 | 0.330 |
| High School Statistics | 0.199 | 0.176 |
| High School US History | 0.284 | 0.250 |
| High School World History | 0.312 | 0.274 |
| Human Aging | 0.369 | 0.430 |
| Human Sexuality | 0.481 | 0.321 |
| International Law | 0.603 | 0.405 |
| Jurisprudence | 0.491 | 0.370 |
| Logical Fallacies | 0.368 | 0.276 |
| Machine Learning | 0.214 | 0.312 |
| Management | 0.350 | 0.379 |
| Marketing | 0.521 | 0.547 |
| Medical Genetics | 0.320 | 0.330 |
| Miscellaneous | 0.446 | 0.443 |
| Moral Disputes | 0.422 | 0.306 |
| Moral Scenarios | 0.248 | 0.241 |
| Nutrition | 0.412 | 0.346 |
| Philosophy | 0.408 | 0.328 |
| Prehistory | 0.429 | 0.349 |
| Professional Accounting | 0.344 | 0.273 |
| Professional Law | 0.306 | 0.244 |
| Professional Medicine | 0.228 | 0.206 |
| Professional Psychology | 0.337 | 0.315 |
| Public Relations | 0.391 | 0.373 |
| Security Studies | 0.469 | 0.335 |
| Sociology | 0.498 | 0.408 |
| US Foreign Policy | 0.590 | 0.490 |
| Virology | 0.422 | 0.416 |
| World Religions | 0.404 | 0.304 |
| Average (All Communities) | 0.348 | 0.317 |
| {"language": ["ar"], "license": "llama3", "library_name": "transformers", "datasets": ["2A2I/argilla-dpo-mix-7k-arabic"], "pipeline_tag": "text-generation"} | MohamedRashad/Arabic-Orpo-Llama-3-8B-Instruct | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ar",
"dataset:2A2I/argilla-dpo-mix-7k-arabic",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T17:18:51+00:00 |
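As referenced in the card above, a minimal chat-inference sketch for the Arabic ORPO finetune; the Arabic user message ("tell me a short story") is a made-up example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: chat with the Arabic ORPO finetune via the Llama-3 chat template.
repo = "MohamedRashad/Arabic-Orpo-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "احك لي قصة قصيرة"}]  # made-up prompt
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```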
text2text-generation | transformers | {} | lkid08/25k_training_w_anglebraces_28-04 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T17:21:06+00:00 |
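The T5 row above gives no task description, so the input below is only a placeholder; a minimal text2text sketch:

```python
from transformers import pipeline

# Sketch for the text2text row above; the input string is a placeholder since
# the card does not describe the expected format.
generator = pipeline("text2text-generation", model="lkid08/25k_training_w_anglebraces_28-04")
print(generator("<placeholder input in whatever format the model was trained on>")[0]["generated_text"])
```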
|
null | null | {"license": "openrail"} | janikovakov/Kalamarko_Squidward | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-28T17:22:08+00:00 |
|
null | null | {} | emmzee/idefics-9b-doodles | null | [
"region:us"
]
| null | 2024-04-28T17:22:31+00:00 |
|
text-generation | transformers |
<img src="https://huggingface.co/lodrick-the-lafted/Olethros-8B/resolve/main/olethros.png">
L3-8b-Instruct tuned on roughly 6000 Opus generations in the hopes of adding a bit of sovl. | {"license": "llama3", "datasets": ["lodrick-the-lafted/OpusStories", "lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K", "lodrick-the-lafted/Samantha-Opus", "lodrick-the-lafted/Worldsim-Opus"]} | blockblockblock/Olethros-8B-bpw4.8-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:lodrick-the-lafted/OpusStories",
"dataset:lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K",
"dataset:lodrick-the-lafted/Samantha-Opus",
"dataset:lodrick-the-lafted/Worldsim-Opus",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T17:25:31+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-yelp
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mirrored in the sketch after this row):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "distilbert-yelp", "results": []}]} | huiang/distilbert-yelp | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T17:26:05+00:00 |
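A sketch of `TrainingArguments` mirroring the hyperparameters listed in the distilbert-yelp card above; `output_dir` is an assumption, and the dataset and `Trainer` wiring are omitted:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters from the card above; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="distilbert-yelp",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 from the card are the defaults.
)
```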
token-classification | transformers | {} | AliSaadatV/esm2_t12_35M_UR50D-finetuned-COMPBIAS_earlystop_70_15_15 | null | [
"transformers",
"tensorboard",
"safetensors",
"esm",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T17:26:10+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | tomaszki/llama-12 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T17:26:16+00:00 |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | presencesw/phobert-large-vinli-3-label | null | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T17:26:53+00:00 |
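For the 3-label PhoBERT classifier above (plausibly an NLI head, given the ViNLI-style name), a hedged sketch; the Vietnamese sentence pair is a made-up example, and the label names depend on the model's config:

```python
from transformers import pipeline

# Sketch: sentence-pair classification with the PhoBERT finetune above.
classifier = pipeline("text-classification", model="presencesw/phobert-large-vinli-3-label")
result = classifier({"text": "Trời hôm nay đẹp.", "text_pair": "Hôm nay trời nắng."})
print(result)  # {"label": ..., "score": ...} — label names come from the model config
```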
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | dsodhia/gemma_peft_model_emotion_detection | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T17:27:20+00:00 |
token-classification | transformers | {"license": "mit"} | xlreator/snomed-canine-s | null | [
"transformers",
"safetensors",
"canine",
"token-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T17:28:21+00:00 |
|
null | null | {"license": "apache-2.0"} | Abhishek4623/TailsAi | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-04-28T17:30:41+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | tomaszki/llama-12-a | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T17:30:47+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** richie-ghost
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library (see the GGUF loading sketch after this row).
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/tinyllama-bnb-4bit"} | richie-ghost/unsloth-tiny-llama-GGUF | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T17:34:28+00:00 |
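As referenced in the card above, GGUF checkpoints like this one are typically run with llama.cpp bindings; a hedged sketch with `llama-cpp-python`, where the exact `.gguf` filename inside the repo is an assumption:

```python
from llama_cpp import Llama

# Sketch: pull a quantized GGUF file from the repo above and run a completion.
llm = Llama.from_pretrained(
    repo_id="richie-ghost/unsloth-tiny-llama-GGUF",
    filename="*Q4_K_M.gguf",  # glob for an assumed quant variant; adjust to the real file
    n_ctx=2048,
)
out = llm("Q: What is TinyLlama? A:", max_tokens=64)
print(out["choices"][0]["text"])
```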
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/0zf8wav | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T17:36:27+00:00 |
null | null | {} | dasfdsewfdsf/eye | null | [
"region:us"
]
| null | 2024-04-28T17:37:15+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/b6nq8hv | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T17:38:16+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | tomaszki/llama-12-b | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T17:38:36+00:00 |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
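The card leaves this section empty; below is a minimal sketch (not the authors' code) using the transformers pipeline API. It assumes the checkpoint is a standard BERT sequence-classification model, as the tags suggest; the label names depend on the undocumented training data.

```python
from transformers import pipeline

# Repo id from this card's metadata; label names come from the model config.
classifier = pipeline("text-classification", model="guna-2222/NLP_task2")
print(classifier("This is a sample sentence to classify."))
```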
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | guna-2222/NLP_task2 | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T17:39:35+00:00 |
text-generation | transformers |
<img src=https://huggingface.co/lodrick-the-lafted/Olethros-8B/resolve/main/olethros.png>
L3-8b-Instruct tuned on roughly 6000 Opus generations in the hopes of adding a bit of sovl. | {"license": "llama3", "datasets": ["lodrick-the-lafted/OpusStories", "lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K", "lodrick-the-lafted/Samantha-Opus", "lodrick-the-lafted/Worldsim-Opus"]} | blockblockblock/Olethros-8B-bpw5-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:lodrick-the-lafted/OpusStories",
"dataset:lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K",
"dataset:lodrick-the-lafted/Samantha-Opus",
"dataset:lodrick-the-lafted/Worldsim-Opus",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"5-bit",
"region:us"
]
| null | 2024-04-28T17:40:44+00:00 |
null | null | {"license": "openrail"} | Anderkill/MoyPop | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-28T17:41:21+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_PhayaThaiBert
This model is a fine-tuned version of [SuratanBoonpong/Phayathaibert_sentiment_analysis](https://huggingface.co/SuratanBoonpong/Phayathaibert_sentiment_analysis) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "base_model": "SuratanBoonpong/Phayathaibert_sentiment_analysis", "model-index": [{"name": "model_PhayaThaiBert", "results": []}]} | tidarat/model_PhayaThaiBert | null | [
"transformers",
"tensorboard",
"safetensors",
"camembert",
"text-classification",
"generated_from_trainer",
"base_model:SuratanBoonpong/Phayathaibert_sentiment_analysis",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T17:43:26+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
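The card leaves this section empty; below is a minimal sketch (not the authors' code) that assumes the tokenizer ships a chat template, as the model's conversational tag suggests.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "golf2248/fnfnyn6"  # repo id from this card's metadata

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative chat turn; assumes a chat template is bundled with the tokenizer.
messages = [{"role": "user", "content": "What is a model card?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```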
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | golf2248/fnfnyn6 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T17:43:46+00:00 |
null | transformers | {"license": "openrail"} | mubashir32/KidzAiLlama2 | null | [
"transformers",
"license:openrail",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T17:46:01+00:00 |
|
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
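The card leaves this section empty; a minimal sketch (not the authors' code) is given below. The repo name suggests a FinBERT-style financial classifier, so the example input is financial text, but the actual labels come from the undocumented model config.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Ornelas7/model-text-classification-finbert"  # repo id from this card's metadata

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Quarterly revenue grew faster than expected.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Map each probability to its label name from the model config.
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```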
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ornelas7/model-text-classification-finbert | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T17:46:02+00:00 |
null | peft |
# Oolong
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit) on the identity, alpaca_gpt4_en, nectar_sft, slimorca, and wikiqa datasets.
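The card does not include a usage snippet; below is a minimal sketch, assuming the adapter loads with the standard PEFT API (this is not the authors' documented workflow).

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "tarob0ba/Oolong-Llama-3-8B-lora"  # repo id from this card's metadata

# AutoPeftModel reads the base model name from the adapter config.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
# If the adapter repo does not ship tokenizer files, load them from the base model.
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-Instruct-bnb-4bit")
```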
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 1
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "other", "library_name": "peft", "tags": ["llama-factory", "lora", "unsloth", "generated_from_trainer"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit", "model-index": [{"name": "oolong_llama3_lora", "results": []}]} | tarob0ba/Oolong-Llama-3-8B-lora | null | [
"peft",
"tensorboard",
"safetensors",
"llama-factory",
"lora",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:other",
"region:us"
]
| null | 2024-04-28T17:46:20+00:00 |
text-generation | transformers | {} | yirenc/Meta-Llama-3-8B-on-truthfulQA_first_500_all_correct_answer | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T17:46:37+00:00 |
|
text2text-generation | null |
# wendys-llc/unsloth-attempt-Q8_0-GGUF
This model was converted to GGUF format from [`wendys-llc/unsloth-attempt`](https://huggingface.co/wendys-llc/unsloth-attempt) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/wendys-llc/unsloth-attempt) for more details on the model.
## Prompt
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Use the Input below to explain a task or topic
### Input:
{}
### Response:
{}
```
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo wendys-llc/unsloth-attempt-Q8_0-GGUF --model unsloth-attempt.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo wendys-llc/unsloth-attempt-Q8_0-GGUF --model unsloth-attempt.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m unsloth-attempt.Q8_0.gguf -n 128
``` | {"tags": ["llama-cpp", "gguf-my-repo", "text-generation-inference"], "datasets": ["wendys-llc/domestic-receipts"], "pipeline_tag": "text2text-generation"} | wendys-llc/unsloth-attempt-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation-inference",
"text2text-generation",
"dataset:wendys-llc/domestic-receipts",
"region:us"
]
| null | 2024-04-28T17:47:05+00:00 |
summarization | transformers | # Model Card
This is an Estonian Parliament stenogram summarization model. It is trained on the [et_parliament_stenos_summary](https://huggingface.co/datasets/rristo/et_parliament_stenos_summary) dataset, which consists of Parliament dialogues/talks.
### Model Description
This model was created to experiment with whether it is possible to train a simple Estonian summarization model with an input sequence length longer than 1024 tokens.
- **Model type:** T5
- **Language(s) (NLP):** Estonian
- **Finetuned from model:** [agemagician/mlong-t5-tglobal-base](https://huggingface.co/agemagician/mlong-t5-tglobal-base). The vocabulary of the original model was reduced to keep only tokens present in the training data.
- **Maximum input sequence (tokens):** 2048
## Uses
### Direct Use
The model is intended to be used for summarizing stenograms of Estonian Parliament talks. It might work with somewhat reasonable accuracy on other Estonian texts.
## Bias, Risks, and Limitations
Biases coming from the original pre-trained model and from the Estonian Parliament dataset (and from GPT-3.5, which was used to create the training-data summaries) are probably present in the model. No extensive study has been made.
### Recommendations
Don't use the model if you need very accurate results; it might miss important aspects of the original text and hallucinate.
## How to Get Started with the Model
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("rristo/mlong-t5-tglobal-base-et-riigikogu-summary")
model = AutoModelForSeq2SeqLM.from_pretrained("rristo/mlong-t5-tglobal-base-et-riigikogu-summary")
text="""Varasematest uuringutest on teada, et punetav nΓ€gu vΓ΅ib mΓ€rku anda erutusest nΓ€iteks aaradel ja raisakotkastel. Sestap huvitas Tours'i Γlikooli etoloog Delphine Soulet'd ja tema kolleege, kas sarnast tundemΓ€rki vΓ΅ib nΓ€ha ka kodukanade (Gallus gallus domesticus) nΓ€gudel.
TΓΆΓΆrΓΌhm filmis esmalt kuut Sussexi tΓ΅ugu kana erinevates olukordades. MΓ΅nes olukorras toimetasid kanad loomulikult omasoodu, teistes aga juhtisid uurijad lindude tegevust. PΓ΅nevates ja autasu tΓ΅otavates olukordades lasi tΓΆΓΆrΓΌhm kanadel vΓ΅tta tolmuvanni vΓ΅i sΓΆΓΆtis neid ussikestega. Hirmuga seotud olukordades pΓΌΓΌdsid uurijad linde kΓ€sitsi kinni.
Katsete jΓ€rel oli tΓΆΓΆrΓΌhma pΓ€ralt videosalvestistest vΓ΅etud tuhandeid ΓΌksikkaadreid. Just nende analΓΌΓΌsiks loodud algoritmi toel said uurijad tΓ€pselt jΓ€lgida, kui punased olid igas olukorras kanade hari, pΓ΅sed, kΓ΅rvanibud ja lotid.
Tâârühma sánul oli uuringu valim vÀike, mistáttu vajavad tulemused kinnitamist suuremas kordusuuringus. Siiski ilmneb tulemustest, et vÀhem punetavad pásed ja kárvanibud váivad viidata linnu rahulikule ja ráámsale seisundile. Vastukaaluks nÀib punetavam nÀgu mÀrku andvat linnu suuremast emotsionaalsest erutusest. Sinna hulka kuuluvad nii ussikeste saamisega seotud elevus kui ka hirm.
Soulet ja kolleegid tegid veel ΓΌhe katse, kus jaotasid 25 Sussexi tΓ΅ugu kana kahte rΓΌhma. Uurijad kΓ€isid viie nΓ€dala jooksul 13 linnu juures, et kanu pisitasa inimese kohaoluga harjutada. ΓlejÀÀnud 12 lindu jΓ€eti viieks nΓ€dalaks kontrollrΓΌhmana omapΓ€i.
Kui siis kΓ΅ik kanad viie nΓ€dala mΓΆΓΆdudes uuesti inimestega kokku puutusid, ilmnes kahe kanarΓΌhma vahel selge vahe. Uurijatega harjunud linnud pelgasid inimest vΓ€hem ja muutusid nende juuresolekul nΓ€ost vΓ€hem punaseks, kui nende ΓΌksi jΓ€etud liigikaaslased."""
def summarize(text, model, tokenizer, max_new_tokens=512, device='cuda'):
    # Tokenize the input text (batch size 1)
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    # Generate the summary on the given device and decode it back to a string
    outputs = model.generate(input_ids=input_ids.to(device), max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

DEVICE = 'cuda'
model = model.to(DEVICE)
summarize(text, model, tokenizer, device=DEVICE)
```
## Training Details
### Training Data
- [et_parliament_stenos_summary](https://huggingface.co/datasets/rristo/et_parliament_stenos_summary)
### Training Procedure
Training notebook is available [here](https://github.com/RRisto/longer_text_summary/blob/main/training/mLongT5/long_mt5_base_et_finetune_rk.ipynb)
An explanation of the process can be found [here](https://ristohinno.medium.com/estonian-longer-text-summarization-8ddbf7f7cd45).
#### Training Hyperparameters
- **Training regime:** fp32
- **learning_rate:** 5e-5
- **num_train_epochs:** 12
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
Test data is from [et_parliament_stenos_summary](https://huggingface.co/datasets/rristo/et_parliament_stenos_summary) test set, which contains stenograms not present in the training data.
#### Metrics and results
- rouge1: 36.1651
- rouge2: 15.9668
- rougeL: 28.339
- rougeLsum: 33.767
| {"language": ["et"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["rristo/et_parliament_stenos_summary"], "metrics": [{"name": "rouge1", "type": "rouge1", "value": 36.1651, "verified": false}, {"name": "rouge2", "type": "rouge2", "value": 15.9668, "verified": false}, {"name": "rougeL", "type": "rougeL", "value": 28.339, "verified": false}, {"name": "rougeLsum", "type": "rougeLsum", "value": 33.767, "verified": false}], "pipeline_tag": "summarization"} | rristo/mlong-t5-tglobal-base-et-riigikogu-summary | null | [
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"summarization",
"et",
"dataset:rristo/et_parliament_stenos_summary",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T17:48:37+00:00 |
text-generation | transformers |
# Aether 7b DPO!
- **Developed by:** xi0v
# Model Description
**Aether-7B-Chat-v1.0** is a 7 billion parameter GPT-like model, primarily trained in English. It is fine-tuned from the _unsloth/zephyr-sft-bnb-4bit_ model. The model was trained using Direct Preference Optimization (DPO), which has proven to be effective in enhancing the performance of language models.
# Intended Uses & Limitations
Aether-7B-Chat-v1.0 is intended to be used as a helpful AI assistant, capable of answering questions, providing explanations, and generating text.
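A minimal usage sketch (not from the authors), assuming the checkpoint works with the transformers text-generation pipeline and inherits a chat template from the Zephyr base model:

```python
from transformers import pipeline

# Repo id from this card's metadata; requires a transformers version whose
# text-generation pipeline accepts chat-style message lists.
chat = pipeline("text-generation", model="xi0v/aether-7b-chat-v1.0")
messages = [{"role": "user", "content": "Give me three tips for writing clear emails."}]
print(chat(messages, max_new_tokens=150)[0]["generated_text"])
```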
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "dpo"], "base_model": "unsloth/zephyr-sft-bnb-4bit"} | xi0v/aether-7b-chat-v1.0 | null | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"conversational",
"en",
"base_model:unsloth/zephyr-sft-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2024-04-28T17:48:44+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tulu2-7b-cost-UF-both-5e-7
This model is a fine-tuned version of [allenai/tulu-2-7b](https://huggingface.co/allenai/tulu-2-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6946
- Rewards/chosen: 0.0316
- Rewards/rejected: 0.0333
- Rewards/accuracies: 0.5195
- Rewards/margins: -0.0018
- Rewards/margins Max: 0.0952
- Rewards/margins Min: -0.1041
- Rewards/margins Std: 0.0646
- Logps/rejected: -316.1527
- Logps/chosen: -330.8240
- Logits/rejected: 0.8900
- Logits/chosen: 0.7447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Rewards/margins Max | Rewards/margins Min | Rewards/margins Std | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:-------------------:|:-------------------:|:-------------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6506 | 1.0 | 1359 | 0.6946 | 0.0316 | 0.0333 | 0.5195 | -0.0018 | 0.0952 | -0.1041 | 0.0646 | -316.1527 | -330.8240 | 0.8900 | 0.7447 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "allenai/tulu-2-7b", "model-index": [{"name": "tulu2-7b-cost-UF-both-5e-7", "results": []}]} | just1nseo/tulu2-7b-cost-UF-both-5e-7 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:allenai/tulu-2-7b",
"region:us"
]
| null | 2024-04-28T17:48:48+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6046
- Bleu: 5.7346
- Gen Len: 17.6051
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.8659 | 1.0 | 6355 | 1.6287 | 5.5916 | 17.6095 |
| 1.8074 | 2.0 | 12710 | 1.6046 | 5.7346 | 17.6051 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.3.0+cu118
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "t5-small", "model-index": [{"name": "my_awesome_opus_books_model", "results": []}]} | miguelactc27/my_awesome_opus_books_model | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T17:49:17+00:00 |
text-generation | transformers |
# Uploaded model
- **Developed by:** richie-ghost
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft", "generated_from_trainer"], "base_model": "unsloth/tinyllama-bnb-4bit"} | richie-ghost/Tinyllama-FT-unsloth-quantized_merged | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"generated_from_trainer",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T17:50:36+00:00 |
null | null | {"license": "openrail"} | Coolwowsocoolwow/Pizza_Pizza | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-28T17:50:42+00:00 |
|
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - egioia/corgy_reperti_LoRA
<Gallery />
## Model description
These are egioia/corgy_reperti_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `TOK reperti` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/egioia/corgy_reperti_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
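The snippet above remains a TODO in the original card. As a stop-gap, here is a minimal sketch assuming the standard diffusers LoRA-loading API; the base model, fp16-fix VAE, and trigger phrase are taken from this card, while the sampler settings are illustrative.

```python
import torch
from diffusers import AutoencoderKL, AutoPipelineForText2Image

# Base model and VAE are named in this card; the LoRA repo id is this card's repo.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("egioia/corgy_reperti_LoRA")

# "TOK reperti" is the trigger phrase documented above.
image = pipe("a photo of TOK reperti", num_inference_steps=25).images[0]
image.save("reperti.png")
```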
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "TOK reperti", "widget": []} | egioia/corgy_reperti_LoRA | null | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"dora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| null | 2024-04-28T17:52:34+00:00 |
null | null | {} | lewan4/Arel_graphic_lecture | null | [
"region:us"
]
| null | 2024-04-28T17:53:40+00:00 |
|
text-generation | transformers |
# Uploaded model
- **Developed by:** richie-ghost
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft", "generated_from_trainer"], "base_model": "unsloth/tinyllama-bnb-4bit"} | richie-ghost/Tinyllama-FT-unsloth-quantized_merge_4Bit | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T17:53:42+00:00 |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0428HMA3
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1059
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.645 | 0.09 | 10 | 1.7359 |
| 1.1171 | 0.18 | 20 | 0.4457 |
| 0.2438 | 0.27 | 30 | 0.1612 |
| 0.1568 | 0.36 | 40 | 0.1498 |
| 0.1473 | 0.45 | 50 | 0.1478 |
| 0.1471 | 0.54 | 60 | 0.1482 |
| 0.1545 | 0.63 | 70 | 0.1474 |
| 0.1526 | 0.73 | 80 | 0.1488 |
| 0.1433 | 0.82 | 90 | 0.1479 |
| 0.1452 | 0.91 | 100 | 0.1482 |
| 0.1488 | 1.0 | 110 | 0.1496 |
| 0.1438 | 1.09 | 120 | 0.1489 |
| 0.145 | 1.18 | 130 | 0.1476 |
| 0.1453 | 1.27 | 140 | 0.1467 |
| 0.1482 | 1.36 | 150 | 0.1462 |
| 0.1408 | 1.45 | 160 | 0.1443 |
| 0.1411 | 1.54 | 170 | 0.1384 |
| 0.1312 | 1.63 | 180 | 0.1297 |
| 0.1321 | 1.72 | 190 | 0.1316 |
| 0.1246 | 1.81 | 200 | 0.1237 |
| 0.1232 | 1.9 | 210 | 0.1183 |
| 0.12 | 1.99 | 220 | 0.1173 |
| 0.1099 | 2.08 | 230 | 0.1167 |
| 0.1069 | 2.18 | 240 | 0.1131 |
| 0.1032 | 2.27 | 250 | 0.1125 |
| 0.1063 | 2.36 | 260 | 0.1125 |
| 0.1052 | 2.45 | 270 | 0.1108 |
| 0.1024 | 2.54 | 280 | 0.1087 |
| 0.0945 | 2.63 | 290 | 0.1081 |
| 0.0971 | 2.72 | 300 | 0.1076 |
| 0.103 | 2.81 | 310 | 0.1065 |
| 0.1022 | 2.9 | 320 | 0.1060 |
| 0.1039 | 2.99 | 330 | 0.1059 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "G0428HMA3", "results": []}]} | Litzy619/G0428HMA3 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
]
| null | 2024-04-28T17:54:41+00:00 |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0428HMA2
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7283 | 0.09 | 10 | 1.9180 |
| 1.3514 | 0.18 | 20 | 0.6885 |
| 0.3688 | 0.27 | 30 | 0.1812 |
| 0.1614 | 0.36 | 40 | 0.1524 |
| 0.1477 | 0.45 | 50 | 0.1476 |
| 0.1475 | 0.54 | 60 | 0.1480 |
| 0.1477 | 0.63 | 70 | 0.1475 |
| 0.1481 | 0.73 | 80 | 0.1481 |
| 0.1415 | 0.82 | 90 | 0.1487 |
| 0.1455 | 0.91 | 100 | 0.1473 |
| 0.1484 | 1.0 | 110 | 0.1482 |
| 0.143 | 1.09 | 120 | 0.1482 |
| 0.1441 | 1.18 | 130 | 0.1479 |
| 0.1452 | 1.27 | 140 | 0.1453 |
| 0.1464 | 1.36 | 150 | 0.1433 |
| 0.1394 | 1.45 | 160 | 0.1517 |
| 0.1425 | 1.54 | 170 | 0.1415 |
| 0.1378 | 1.63 | 180 | 0.1336 |
| 0.1322 | 1.72 | 190 | 0.1349 |
| 0.1269 | 1.81 | 200 | 0.1243 |
| 0.1255 | 1.9 | 210 | 0.1209 |
| 0.1212 | 1.99 | 220 | 0.1208 |
| 0.1115 | 2.08 | 230 | 0.1180 |
| 0.1151 | 2.18 | 240 | 0.1169 |
| 0.1089 | 2.27 | 250 | 0.1160 |
| 0.1085 | 2.36 | 260 | 0.1134 |
| 0.1099 | 2.45 | 270 | 0.1118 |
| 0.1031 | 2.54 | 280 | 0.1112 |
| 0.0986 | 2.63 | 290 | 0.1099 |
| 0.1008 | 2.72 | 300 | 0.1091 |
| 0.1075 | 2.81 | 310 | 0.1087 |
| 0.1048 | 2.9 | 320 | 0.1085 |
| 0.1047 | 2.99 | 330 | 0.1085 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "G0428HMA2", "results": []}]} | Litzy619/G0428HMA2 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
]
| null | 2024-04-28T17:54:49+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** xiaoliy2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"} | xiaoliy2/mistral-7b-instruct-ft-formal-1 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T17:55:00+00:00 |
text-generation | transformers |
# Uploaded model
- **Developed by:** Jogendra0411
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl", "sft"], "base_model": "unsloth/gemma-2b-it-bnb-4bit"} | Jogendra0411/gemmauppie | null | [
"transformers",
"pytorch",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-2b-it-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T17:55:18+00:00 |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0428HMA4
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8227 | 0.09 | 10 | 2.1171 |
| 1.6416 | 0.18 | 20 | 1.0605 |
| 0.6589 | 0.27 | 30 | 0.2594 |
| 0.1907 | 0.36 | 40 | 0.1623 |
| 0.1539 | 0.45 | 50 | 0.1509 |
| 0.1503 | 0.54 | 60 | 0.1492 |
| 0.1479 | 0.63 | 70 | 0.1475 |
| 0.1494 | 0.73 | 80 | 0.1482 |
| 0.1415 | 0.82 | 90 | 0.1490 |
| 0.1453 | 0.91 | 100 | 0.1474 |
| 0.1486 | 1.0 | 110 | 0.1482 |
| 0.1426 | 1.09 | 120 | 0.1473 |
| 0.1437 | 1.18 | 130 | 0.1473 |
| 0.1444 | 1.27 | 140 | 0.1464 |
| 0.1468 | 1.36 | 150 | 0.1456 |
| 0.1422 | 1.45 | 160 | 0.1481 |
| 0.143 | 1.54 | 170 | 0.1451 |
| 0.1426 | 1.63 | 180 | 0.1438 |
| 0.1436 | 1.72 | 190 | 0.1450 |
| 0.1398 | 1.81 | 200 | 0.1374 |
| 0.1353 | 1.9 | 210 | 0.1372 |
| 0.1339 | 1.99 | 220 | 0.1310 |
| 0.1229 | 2.08 | 230 | 0.1288 |
| 0.1229 | 2.18 | 240 | 0.1268 |
| 0.1209 | 2.27 | 250 | 0.1251 |
| 0.1238 | 2.36 | 260 | 0.1220 |
| 0.1223 | 2.45 | 270 | 0.1222 |
| 0.1151 | 2.54 | 280 | 0.1208 |
| 0.1131 | 2.63 | 290 | 0.1182 |
| 0.1129 | 2.72 | 300 | 0.1173 |
| 0.113 | 2.81 | 310 | 0.1168 |
| 0.1162 | 2.9 | 320 | 0.1167 |
| 0.1152 | 2.99 | 330 | 0.1167 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "G0428HMA4", "results": []}]} | Litzy619/G0428HMA4 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
]
| null | 2024-04-28T17:55:37+00:00 |
text-generation | transformers |
<img src=https://huggingface.co/lodrick-the-lafted/Olethros-8B/resolve/main/olethros.png>
L3-8b-Instruct tuned on roughly 6000 Opus generations in the hopes of adding a bit of sovl. | {"license": "llama3", "datasets": ["lodrick-the-lafted/OpusStories", "lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K", "lodrick-the-lafted/Samantha-Opus", "lodrick-the-lafted/Worldsim-Opus"]} | blockblockblock/Olethros-8B-bpw5.5-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:lodrick-the-lafted/OpusStories",
"dataset:lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K",
"dataset:lodrick-the-lafted/Samantha-Opus",
"dataset:lodrick-the-lafted/Worldsim-Opus",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T17:56:04+00:00 |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0428HMA5
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7283 | 0.09 | 10 | 1.9180 |
| 1.3514 | 0.18 | 20 | 0.6885 |
| 0.3688 | 0.27 | 30 | 0.1812 |
| 0.1614 | 0.36 | 40 | 0.1524 |
| 0.1477 | 0.45 | 50 | 0.1476 |
| 0.1475 | 0.54 | 60 | 0.1480 |
| 0.1477 | 0.63 | 70 | 0.1475 |
| 0.1481 | 0.73 | 80 | 0.1481 |
| 0.1415 | 0.82 | 90 | 0.1487 |
| 0.1455 | 0.91 | 100 | 0.1473 |
| 0.1484 | 1.0 | 110 | 0.1482 |
| 0.143 | 1.09 | 120 | 0.1482 |
| 0.1441 | 1.18 | 130 | 0.1479 |
| 0.1452 | 1.27 | 140 | 0.1453 |
| 0.1464 | 1.36 | 150 | 0.1433 |
| 0.1394 | 1.45 | 160 | 0.1517 |
| 0.1425 | 1.54 | 170 | 0.1415 |
| 0.1378 | 1.63 | 180 | 0.1336 |
| 0.1322 | 1.72 | 190 | 0.1349 |
| 0.1269 | 1.81 | 200 | 0.1243 |
| 0.1255 | 1.9 | 210 | 0.1209 |
| 0.1212 | 1.99 | 220 | 0.1208 |
| 0.1115 | 2.08 | 230 | 0.1180 |
| 0.1151 | 2.18 | 240 | 0.1169 |
| 0.1089 | 2.27 | 250 | 0.1160 |
| 0.1085 | 2.36 | 260 | 0.1134 |
| 0.1099 | 2.45 | 270 | 0.1118 |
| 0.1031 | 2.54 | 280 | 0.1112 |
| 0.0986 | 2.63 | 290 | 0.1099 |
| 0.1008 | 2.72 | 300 | 0.1091 |
| 0.1075 | 2.81 | 310 | 0.1087 |
| 0.1048 | 2.9 | 320 | 0.1085 |
| 0.1047 | 2.99 | 330 | 0.1085 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "G0428HMA5", "results": []}]} | Litzy619/G0428HMA5 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
]
| null | 2024-04-28T17:56:06+00:00 |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0428HMA6
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1059
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.645 | 0.09 | 10 | 1.7359 |
| 1.1171 | 0.18 | 20 | 0.4457 |
| 0.2438 | 0.27 | 30 | 0.1612 |
| 0.1568 | 0.36 | 40 | 0.1498 |
| 0.1473 | 0.45 | 50 | 0.1478 |
| 0.1471 | 0.54 | 60 | 0.1482 |
| 0.1545 | 0.63 | 70 | 0.1474 |
| 0.1526 | 0.73 | 80 | 0.1488 |
| 0.1433 | 0.82 | 90 | 0.1479 |
| 0.1452 | 0.91 | 100 | 0.1482 |
| 0.1488 | 1.0 | 110 | 0.1496 |
| 0.1438 | 1.09 | 120 | 0.1489 |
| 0.145 | 1.18 | 130 | 0.1476 |
| 0.1453 | 1.27 | 140 | 0.1467 |
| 0.1482 | 1.36 | 150 | 0.1462 |
| 0.1408 | 1.45 | 160 | 0.1443 |
| 0.1411 | 1.54 | 170 | 0.1384 |
| 0.1312 | 1.63 | 180 | 0.1297 |
| 0.1321 | 1.72 | 190 | 0.1316 |
| 0.1246 | 1.81 | 200 | 0.1237 |
| 0.1232 | 1.9 | 210 | 0.1183 |
| 0.12 | 1.99 | 220 | 0.1173 |
| 0.1099 | 2.08 | 230 | 0.1167 |
| 0.1069 | 2.18 | 240 | 0.1131 |
| 0.1032 | 2.27 | 250 | 0.1125 |
| 0.1063 | 2.36 | 260 | 0.1125 |
| 0.1052 | 2.45 | 270 | 0.1108 |
| 0.1024 | 2.54 | 280 | 0.1087 |
| 0.0945 | 2.63 | 290 | 0.1081 |
| 0.0971 | 2.72 | 300 | 0.1076 |
| 0.103 | 2.81 | 310 | 0.1065 |
| 0.1022 | 2.9 | 320 | 0.1060 |
| 0.1039 | 2.99 | 330 | 0.1059 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "G0428HMA6", "results": []}]} | Litzy619/G0428HMA6 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
]
| null | 2024-04-28T17:56:22+00:00 |
null | null | {} | Platino/Fiona.Mueller | null | [
"region:us"
]
| null | 2024-04-28T17:56:35+00:00 |
|
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Anirudh Sriram, Vishwa Akkati, Nitin Kanchi, Arnav Cherukuthota
- **Model type:** Mistral 7B Instruction Fine Tuned on custom dataset
- **Language(s) (NLP):** English
- **License:** MIT License
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.1
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://huggingface.co/VishFish/Mistral-7B-Instruct-Echo-FC
- **Demo [optional]:** https://youtu.be/7VzgsMyVVM4
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Social Media for the Visually Impaired
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Limited to the Echo app only.
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
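Until official usage instructions are added, here is a minimal, untested sketch. It assumes this repository hosts a PEFT (LoRA) adapter for the listed base model; the prompt is a made-up example.

```python
# Sketch only: load the adapter on top of the base model (assumes a PEFT adapter repo).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
model = PeftModel.from_pretrained(base, "VishFish/Mistral-7B-Instruct-Echo-FC")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

# Hypothetical prompt in the Mistral instruction format
inputs = tokenizer("[INST] Read out my latest notifications. [/INST]", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```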
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** GCP T4
- **Hours used:** 1 hour
- **Cloud Provider:** Intel Developer Cloud
- **Compute Region:** US-West
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.1"} | VishFish/Mistral-7B-Instruct-Echo-FC | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
]
| null | 2024-04-28T17:57:09+00:00 |
null | null | {} | Platino/Fiona-Mueller | null | [
"region:us"
]
| null | 2024-04-28T17:57:18+00:00 |
|
null | null | {} | NerdyCivilian/BitTensorSubnet25HK7 | null | [
"region:us"
]
| null | 2024-04-28T17:58:00+00:00 |
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough sketch of this configuration follows the list):
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 36
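The hyperparameters above might map to trl's `SFTTrainer` roughly as sketched below. This is an illustration only, not the author's actual script; the `model` and `dataset` objects are assumed to be in scope, and the exact trl API varies by version:

```python
# Hypothetical sketch of the run described above; not the original training script.
from transformers import TrainingArguments
from trl import SFTTrainer

args = TrainingArguments(
    output_dir="llama3-fictional-arc-chinese",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=16,  # total train batch size 16
    lr_scheduler_type="linear",
    num_train_epochs=36,
    seed=42,
)
trainer = SFTTrainer(model=model, args=args, train_dataset=dataset)
trainer.train()
```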
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "other", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v1", "results": []}]} | yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T18:00:17+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tulu2-7b-cost-UI-5e-7
This model is a fine-tuned version of [allenai/tulu-2-7b](https://huggingface.co/allenai/tulu-2-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6914
- Rewards/chosen: -0.0221
- Rewards/rejected: -0.0257
- Rewards/accuracies: 0.5820
- Rewards/margins: 0.0037
- Rewards/margins Max: 0.0390
- Rewards/margins Min: -0.0317
- Rewards/margins Std: 0.0230
- Logps/rejected: -322.0583
- Logps/chosen: -336.1845
- Logits/rejected: 0.8742
- Logits/chosen: 0.7281
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough sketch of this configuration follows the list):
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
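The hyperparameters above might translate to trl's `DPOTrainer` roughly as sketched below. This is an illustration under assumed trl APIs (newer versions rename some arguments), with `model`, `ref_model`, `tokenizer`, and `preference_dataset` assumed to be in scope:

```python
# Hypothetical DPO setup mirroring the listed hyperparameters; not the original script.
from transformers import TrainingArguments
from trl import DPOTrainer

args = TrainingArguments(
    output_dir="tulu2-7b-cost-UI-5e-7",
    learning_rate=5e-7,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,   # 2 GPUs x batch 2 x accumulation 4 = 16 total
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
)
trainer = DPOTrainer(model=model, ref_model=ref_model, args=args,
                     train_dataset=preference_dataset, tokenizer=tokenizer)
trainer.train()
```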
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Rewards/margins Max | Rewards/margins Min | Rewards/margins Std | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:-------------------:|:-------------------:|:-------------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6522 | 1.0 | 1069 | 0.6914 | -0.0221 | -0.0257 | 0.5820 | 0.0037 | 0.0390 | -0.0317 | 0.0230 | -322.0583 | -336.1845 | 0.8742 | 0.7281 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "allenai/tulu-2-7b", "model-index": [{"name": "tulu2-7b-cost-UI-5e-7", "results": []}]} | just1nseo/tulu2-7b-cost-UI-5e-7 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:allenai/tulu-2-7b",
"region:us"
]
| null | 2024-04-28T18:00:43+00:00 |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-msn-small-finetuned-eurosat
This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6607
- Accuracy: 0.8105
## Model description
More information needed
## Intended uses & limitations
More information needed
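In the absence of documented usage, a minimal inference sketch (untested) follows; the label names returned depend on the imagefolder classes used in training, and `example.jpg` is a placeholder path:

```python
# Minimal image-classification sketch for this checkpoint.
from transformers import pipeline

clf = pipeline("image-classification", model="pk3388/vit-msn-small-finetuned-eurosat")
print(clf("example.jpg"))  # placeholder image path
```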
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.115 | 0.9362 | 11 | 1.0397 | 0.6526 |
| 0.8536 | 1.9574 | 23 | 0.7698 | 0.7579 |
| 0.5677 | 2.9787 | 35 | 0.7200 | 0.7895 |
| 0.419 | 4.0 | 47 | 0.7286 | 0.7842 |
| 0.3365 | 4.9362 | 58 | 0.6607 | 0.8105 |
| 0.2317 | 5.6170 | 66 | 0.6649 | 0.8 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "facebook/vit-msn-small", "model-index": [{"name": "vit-msn-small-finetuned-eurosat", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.8105263157894737, "name": "Accuracy"}]}]}]} | pk3388/vit-msn-small-finetuned-eurosat | null | [
"transformers",
"tensorboard",
"safetensors",
"vit_msn",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/vit-msn-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T18:02:00+00:00 |
null | null | {} | SaimaAyub/bert-base-cased-finetuned-wikitext_2 | null | [
"region:us"
]
| null | 2024-04-28T18:02:04+00:00 |
|
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0428HMA7
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1147
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough sketch of this configuration follows the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
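For illustration only, the configuration above roughly corresponds to the following `TrainingArguments`; the output directory name is assumed and this is not the original training script:

```python
# Hedged sketch of the listed hyperparameters.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="G0428HMA7",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,        # total train batch size 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,                             # "Native AMP" mixed precision
    seed=42,
)
```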
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9118 | 0.09 | 10 | 2.3467 |
| 1.9015 | 0.18 | 20 | 1.3467 |
| 0.9442 | 0.27 | 30 | 0.4489 |
| 0.2728 | 0.36 | 40 | 0.1710 |
| 0.1594 | 0.45 | 50 | 0.1534 |
| 0.1503 | 0.54 | 60 | 0.1509 |
| 0.1483 | 0.63 | 70 | 0.1480 |
| 0.1495 | 0.73 | 80 | 0.1479 |
| 0.1411 | 0.82 | 90 | 0.1498 |
| 0.145 | 0.91 | 100 | 0.1482 |
| 0.1483 | 1.0 | 110 | 0.1488 |
| 0.143 | 1.09 | 120 | 0.1474 |
| 0.1443 | 1.18 | 130 | 0.1482 |
| 0.1446 | 1.27 | 140 | 0.1469 |
| 0.1468 | 1.36 | 150 | 0.1456 |
| 0.1409 | 1.45 | 160 | 0.1483 |
| 0.1441 | 1.54 | 170 | 0.1445 |
| 0.1431 | 1.63 | 180 | 0.1406 |
| 0.1415 | 1.72 | 190 | 0.1392 |
| 0.1321 | 1.81 | 200 | 0.1345 |
| 0.1345 | 1.9 | 210 | 0.1284 |
| 0.1298 | 1.99 | 220 | 0.1282 |
| 0.1215 | 2.08 | 230 | 0.1256 |
| 0.1201 | 2.18 | 240 | 0.1231 |
| 0.1167 | 2.27 | 250 | 0.1216 |
| 0.1202 | 2.36 | 260 | 0.1193 |
| 0.1203 | 2.45 | 270 | 0.1193 |
| 0.1128 | 2.54 | 280 | 0.1190 |
| 0.1103 | 2.63 | 290 | 0.1168 |
| 0.1094 | 2.72 | 300 | 0.1149 |
| 0.1118 | 2.81 | 310 | 0.1146 |
| 0.1147 | 2.9 | 320 | 0.1147 |
| 0.1139 | 2.99 | 330 | 0.1147 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "G0428HMA7", "results": []}]} | Litzy619/G0428HMA7 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
]
| null | 2024-04-28T18:02:19+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
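As a stopgap while the card is incomplete, a generic, untested sketch; it assumes the checkpoint loads as a standard causal LM:

```python
# Placeholder usage sketch; the intended behavior of this model is undocumented.
from transformers import pipeline

generator = pipeline("text-generation", model="shallow6414/912pavq")
print(generator("Hello, how are you?", max_new_tokens=64))
```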
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/912pavq | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T18:03:13+00:00 |
text-generation | transformers |
# Uploaded model
- **Developed by:** KingNish
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
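A minimal, untested generation sketch, assuming the checkpoint loads with standard `transformers` and uses the Llama-3 chat template; the prompt is a made-up example:

```python
# Hedged usage sketch for this fine-tune.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KingNish/Codellama3-8b")
model = AutoModelForCausalLM.from_pretrained("KingNish/Codellama3-8b", device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                          return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```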
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | KingNish/Codellama3-8b | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T18:03:52+00:00 |
null | null | {} | hibalaz/nlp2 | null | [
"safetensors",
"region:us"
]
| null | 2024-04-28T18:04:31+00:00 |
|
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-named-entity-recognition-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0654
- Precision: 0.9360
- Recall: 0.9498
- F1: 0.9429
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
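Pending author documentation, a minimal usage sketch (untested); the entity label set depends on the unspecified training data:

```python
# Hedged NER sketch for this checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="MANMEET75/bert-finetuned-named-entity-recognition-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face was founded in New York City."))
```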
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0727 | 1.0 | 1756 | 0.0650 | 0.9127 | 0.9372 | 0.9248 | 0.9826 |
| 0.0346 | 2.0 | 3512 | 0.0662 | 0.9329 | 0.9446 | 0.9387 | 0.9853 |
| 0.0216 | 3.0 | 5268 | 0.0654 | 0.9360 | 0.9498 | 0.9429 | 0.9861 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-base-cased", "model-index": [{"name": "bert-finetuned-named-entity-recognition-ner", "results": []}]} | MANMEET75/bert-finetuned-named-entity-recognition-ner | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T18:04:31+00:00 |
text-classification | transformers |
# FinanceBERT
FinanceBERT is a transformer-based model specifically fine-tuned for sentiment analysis in the financial sector. It's designed to assess sentiments expressed in financial texts, aiding stakeholders in making data-driven financial decisions.
## Model Description
FinanceBERT uses the BERT architecture, renowned for its deep contextual understanding. This model helps analyze sentiments in financial news articles, reports, and social media content, categorizing them into positive, negative, or neutral sentiments.
## How to Use
To use FinanceBERT, you can load it with the Transformers library:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
tokenizer = AutoTokenizer.from_pretrained('marcev/financebert')
model = AutoModelForSequenceClassification.from_pretrained('marcev/financebert')
def predict(text):
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)
    outputs = model(**inputs)
    # Softmax over the logits gives probabilities for [negative, neutral, positive]
    return torch.nn.functional.softmax(outputs.logits, dim=-1)

text = "Your sample text here."
print(predict(text))
```
# Examples
Try out these examples to see FinanceBERT in action:
- "The company's financial performance exceeded expectations this quarter."
- "There are concerns that the recent scandal could lead to a decrease in shareholder trust."
# Evaluation Results
FinanceBERT was evaluated on a held-out test set and achieved the following performance metrics:
- Accuracy: 92%
- F1-Score (Weighted): 92%
- Evaluation Loss: 0.320
# Detailed Performance Metrics
Classification Report:
| Class (index) | Precision | Recall | F1-Score | Support |
|-------------------------|-----------|--------|----------|---------|
| Negative Sentiment (0) | 0.84 | 0.90 | 0.87 | 29 |
| Neutral Sentiment (1) | 0.94 | 0.94 | 0.94 | 199 |
| Positive Sentiment (2) | 0.90 | 0.88 | 0.89 | 83 |
Confusion Matrix:
| Actual \ Predicted | Negative | Neutral | Positive |
|--------------------|----------|---------|----------|
| Actual Negative | 26 | 2 | 1 |
| Actual Neutral | 4 | 188 | 7 |
| Actual Positive | 1 | 9 | 73 |
# Limitations
FinanceBERT has been rigorously trained and tested to ensure reliable performance across a variety of financial texts. However, there are several limitations to consider:
- Domain Specificity: Optimized for financial contexts, may not perform well on non-financial texts.
- Language Support: Currently supports English only.
- Data Bias: Reflects the bias inherent in its training data, which may not include diverse global financial perspectives.
- Interpretability: As a deep learning model, it does not offer easy interpretability of its decision-making process.
# License
This model is released under the GNU General Public License v3.0 (GPL-3.0), requiring that modifications and derivatives remain open source under the same license.
# Acknowledgements
FinanceBERT was developed using the Transformers library by Hugging Face, trained on a curated dataset of financial texts.
| {"language": ["en"], "license": "gpl-3.0", "library_name": "transformers", "tags": ["bert", "transformers", "sentiment-analysis", "finance", "english", "text-classification"], "datasets": ["financial_phrasebank"], "metrics": [{"accuracy": 0.92}, {"f1": 0.92}]} | marcev/financebert | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"sentiment-analysis",
"finance",
"english",
"en",
"dataset:financial_phrasebank",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T18:04:40+00:00 |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0428HMA10
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1147
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9118 | 0.09 | 10 | 2.3467 |
| 1.9015 | 0.18 | 20 | 1.3467 |
| 0.9442 | 0.27 | 30 | 0.4489 |
| 0.2728 | 0.36 | 40 | 0.1710 |
| 0.1594 | 0.45 | 50 | 0.1534 |
| 0.1503 | 0.54 | 60 | 0.1509 |
| 0.1483 | 0.63 | 70 | 0.1480 |
| 0.1495 | 0.73 | 80 | 0.1479 |
| 0.1411 | 0.82 | 90 | 0.1498 |
| 0.145 | 0.91 | 100 | 0.1482 |
| 0.1483 | 1.0 | 110 | 0.1488 |
| 0.143 | 1.09 | 120 | 0.1474 |
| 0.1443 | 1.18 | 130 | 0.1482 |
| 0.1446 | 1.27 | 140 | 0.1469 |
| 0.1468 | 1.36 | 150 | 0.1456 |
| 0.1409 | 1.45 | 160 | 0.1483 |
| 0.1441 | 1.54 | 170 | 0.1445 |
| 0.1431 | 1.63 | 180 | 0.1406 |
| 0.1415 | 1.72 | 190 | 0.1392 |
| 0.1321 | 1.81 | 200 | 0.1345 |
| 0.1345 | 1.9 | 210 | 0.1284 |
| 0.1298 | 1.99 | 220 | 0.1282 |
| 0.1215 | 2.08 | 230 | 0.1256 |
| 0.1201 | 2.18 | 240 | 0.1231 |
| 0.1167 | 2.27 | 250 | 0.1216 |
| 0.1202 | 2.36 | 260 | 0.1193 |
| 0.1203 | 2.45 | 270 | 0.1193 |
| 0.1128 | 2.54 | 280 | 0.1190 |
| 0.1103 | 2.63 | 290 | 0.1168 |
| 0.1094 | 2.72 | 300 | 0.1149 |
| 0.1118 | 2.81 | 310 | 0.1146 |
| 0.1147 | 2.9 | 320 | 0.1147 |
| 0.1139 | 2.99 | 330 | 0.1147 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "G0428HMA10", "results": []}]} | Litzy619/G0428HMA10 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
]
| null | 2024-04-28T18:05:02+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/9h1i7uy | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T18:05:34+00:00 |
automatic-speech-recognition | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
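Since usage is not yet documented, a generic, untested sketch; `sample.wav` is a hypothetical audio file:

```python
# Placeholder ASR sketch for this Whisper fine-tune.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="LeapyDeapy/whisper-small-healv1-lingo")
print(asr("sample.wav"))
```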
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | LeapyDeapy/whisper-small-healv1-lingo | null | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T18:06:34+00:00 |
token-classification | transformers | {} | AliSaadatV/esm2_t12_35M_UR50D-finetuned-DOMAIN_earlystop_70_15_15 | null | [
"transformers",
"tensorboard",
"safetensors",
"esm",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T18:08:01+00:00 |
|
null | null | {"license": "apache-2.0"} | Graca21/G | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-04-28T18:09:57+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | golf2248/tg0x42j | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T18:10:00+00:00 |
text-to-audio | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5_TTS_Dutch_v2
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the procit001/clean_female_dutch_voice_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5736
## Model description
More information needed
## Intended uses & limitations
More information needed
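A minimal, untested synthesis sketch follows. The all-zeros speaker embedding is a placeholder (a real 512-dimensional x-vector produces a more natural voice), and the Dutch sentence is a made-up example:

```python
# Hedged Dutch TTS sketch for this checkpoint.
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("procit001/speecht5_tts_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("procit001/speecht5_tts_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Goedemorgen, hoe gaat het met u?", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder speaker embedding
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```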
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.8592 | 2.1277 | 100 | 0.7767 |
| 0.7352 | 4.2553 | 200 | 0.6372 |
| 0.6856 | 6.3830 | 300 | 0.6163 |
| 0.6503 | 8.5106 | 400 | 0.6015 |
| 0.6289 | 10.6383 | 500 | 0.5910 |
| 0.6246 | 12.7660 | 600 | 0.5858 |
| 0.6252 | 14.8936 | 700 | 0.5778 |
| 0.6263 | 17.0213 | 800 | 0.5769 |
| 0.6314 | 19.1489 | 900 | 0.5767 |
| 0.6266 | 21.2766 | 1000 | 0.5736 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"language": ["nl"], "license": "mit", "tags": ["dutch", "generated_from_trainer"], "datasets": ["procit001/clean_female_dutch_voice_v2"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "SpeechT5_TTS_Dutch_v2", "results": []}]} | procit001/speecht5_tts_nl | null | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"dutch",
"generated_from_trainer",
"nl",
"dataset:procit001/clean_female_dutch_voice_v2",
"base_model:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T18:10:21+00:00 |
text-generation | transformers | {} | sandersonsa/llama-2-7b-miniguanaco | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T18:10:39+00:00 |
|
null | null | {} | Filipeqe/123 | null | [
"region:us"
]
| null | 2024-04-28T18:10:42+00:00 |
|
text-generation | transformers |
<img src=https://huggingface.co/lodrick-the-lafted/Olethros-8B/resolve/main/olethros.png>
Llama-3-8B-Instruct tuned on roughly 6,000 Claude 3 Opus generations in the hopes of adding a bit of sovl. | {"license": "llama3", "datasets": ["lodrick-the-lafted/OpusStories", "lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K", "lodrick-the-lafted/Samantha-Opus", "lodrick-the-lafted/Worldsim-Opus"]} | blockblockblock/Olethros-8B-bpw6-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:lodrick-the-lafted/OpusStories",
"dataset:lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K",
"dataset:lodrick-the-lafted/Samantha-Opus",
"dataset:lodrick-the-lafted/Worldsim-Opus",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"6-bit",
"region:us"
]
| null | 2024-04-28T18:11:29+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Eric-Lan/stack-llama-2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T18:11:58+00:00 |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0428HMA8
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1108
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7848 | 0.09 | 10 | 2.0338 |
| 1.5344 | 0.18 | 20 | 0.9449 |
| 0.5532 | 0.27 | 30 | 0.2231 |
| 0.1757 | 0.36 | 40 | 0.1577 |
| 0.151 | 0.45 | 50 | 0.1493 |
| 0.149 | 0.54 | 60 | 0.1492 |
| 0.1476 | 0.63 | 70 | 0.1472 |
| 0.1488 | 0.73 | 80 | 0.1479 |
| 0.1416 | 0.82 | 90 | 0.1485 |
| 0.1452 | 0.91 | 100 | 0.1475 |
| 0.1484 | 1.0 | 110 | 0.1486 |
| 0.1431 | 1.09 | 120 | 0.1476 |
| 0.1447 | 1.18 | 130 | 0.1481 |
| 0.1451 | 1.27 | 140 | 0.1469 |
| 0.1474 | 1.36 | 150 | 0.1455 |
| 0.1417 | 1.45 | 160 | 0.1463 |
| 0.1428 | 1.54 | 170 | 0.1426 |
| 0.1406 | 1.63 | 180 | 0.1370 |
| 0.1392 | 1.72 | 190 | 0.1435 |
| 0.1355 | 1.81 | 200 | 0.1343 |
| 0.1343 | 1.9 | 210 | 0.1318 |
| 0.1297 | 1.99 | 220 | 0.1237 |
| 0.1205 | 2.08 | 230 | 0.1239 |
| 0.1161 | 2.18 | 240 | 0.1210 |
| 0.1139 | 2.27 | 250 | 0.1177 |
| 0.1159 | 2.36 | 260 | 0.1159 |
| 0.1165 | 2.45 | 270 | 0.1150 |
| 0.111 | 2.54 | 280 | 0.1146 |
| 0.1049 | 2.63 | 290 | 0.1129 |
| 0.1055 | 2.72 | 300 | 0.1116 |
| 0.1108 | 2.81 | 310 | 0.1112 |
| 0.1117 | 2.9 | 320 | 0.1109 |
| 0.1116 | 2.99 | 330 | 0.1108 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "G0428HMA8", "results": []}]} | Litzy619/G0428HMA8 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
]
| null | 2024-04-28T18:12:25+00:00 |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0428HMA9
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1027
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7107 | 0.09 | 10 | 1.8639 |
| 1.2972 | 0.18 | 20 | 0.6487 |
| 0.3487 | 0.27 | 30 | 0.1841 |
| 0.1607 | 0.36 | 40 | 0.1546 |
| 0.1485 | 0.45 | 50 | 0.1486 |
| 0.1502 | 0.54 | 60 | 0.1479 |
| 0.1489 | 0.63 | 70 | 0.1473 |
| 0.1499 | 0.73 | 80 | 0.1478 |
| 0.1422 | 0.82 | 90 | 0.1468 |
| 0.1456 | 0.91 | 100 | 0.1473 |
| 0.1488 | 1.0 | 110 | 0.1490 |
| 0.1431 | 1.09 | 120 | 0.1472 |
| 0.1431 | 1.18 | 130 | 0.1476 |
| 0.1439 | 1.27 | 140 | 0.1411 |
| 0.1413 | 1.36 | 150 | 0.1333 |
| 0.1335 | 1.45 | 160 | 0.1405 |
| 0.1356 | 1.54 | 170 | 0.1308 |
| 0.1266 | 1.63 | 180 | 0.1265 |
| 0.124 | 1.72 | 190 | 0.1253 |
| 0.1202 | 1.81 | 200 | 0.1205 |
| 0.1211 | 1.9 | 210 | 0.1202 |
| 0.1218 | 1.99 | 220 | 0.1167 |
| 0.107 | 2.08 | 230 | 0.1134 |
| 0.1026 | 2.18 | 240 | 0.1116 |
| 0.1024 | 2.27 | 250 | 0.1094 |
| 0.1036 | 2.36 | 260 | 0.1076 |
| 0.1026 | 2.45 | 270 | 0.1052 |
| 0.099 | 2.54 | 280 | 0.1045 |
| 0.0891 | 2.63 | 290 | 0.1047 |
| 0.0949 | 2.72 | 300 | 0.1042 |
| 0.0974 | 2.81 | 310 | 0.1031 |
| 0.0992 | 2.9 | 320 | 0.1028 |
| 0.1024 | 2.99 | 330 | 0.1027 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "G0428HMA9", "results": []}]} | Litzy619/G0428HMA9 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
]
| null | 2024-04-28T18:12:25+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2b-dolly-qa
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
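Since the `peft` tag indicates this is an adapter on top of `google/gemma-2b`, loading presumably follows the standard `peft` pattern (a minimal sketch; inference is not documented in this card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base, "apfurman/gemma-2b-dolly-qa")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

inputs = tokenizer("What is a Dolly-style instruction dataset?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```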
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 1480
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.0.post0+cxx11.abi
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "gemma-2b-dolly-qa", "results": []}]} | apfurman/gemma-2b-dolly-qa | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
]
| null | 2024-04-28T18:12:37+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tulu2-7b-cost-UF-UI-5e-7
This model is a fine-tuned version of [allenai/tulu-2-7b](https://huggingface.co/allenai/tulu-2-7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6930
- Rewards/chosen: 0.0111
- Rewards/rejected: 0.0080
- Rewards/accuracies: 0.5405
- Rewards/margins: 0.0031
- Rewards/margins Max: 0.0923
- Rewards/margins Min: -0.0946
- Rewards/margins Std: 0.0609
- Logps/rejected: -318.2894
- Logps/chosen: -337.2036
- Logits/rejected: 0.9251
- Logits/chosen: 0.7522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
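Given the `trl`/`dpo` tags, training presumably used something like `trl`'s `DPOTrainer` of that era; a rough sketch under that assumption (the toy dataset and `beta` are placeholders, not the actual UF-UI preference data or configuration):
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("allenai/tulu-2-7b")
tokenizer = AutoTokenizer.from_pretrained("allenai/tulu-2-7b")

# Toy preference pairs in the prompt/chosen/rejected format DPOTrainer expects
train_dataset = Dataset.from_dict({
    "prompt": ["Explain DPO in one sentence."],
    "chosen": ["DPO optimizes a policy directly from preference pairs."],
    "rejected": ["DPO is a kind of database."],
})

args = TrainingArguments(
    output_dir="tulu2-7b-cost-UF-UI-5e-7",
    learning_rate=5e-7,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model,
    ref_model=None,   # with a PEFT adapter, the frozen base model serves as the reference
    args=args,
    beta=0.1,         # assumed; the card does not state the DPO beta
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```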
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Rewards/margins Max | Rewards/margins Min | Rewards/margins Std | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:-------------------:|:-------------------:|:-------------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6467 | 1.0 | 2428 | 0.6930 | 0.0111 | 0.0080 | 0.5405 | 0.0031 | 0.0923 | -0.0946 | 0.0609 | -318.2894 | -337.2036 | 0.9251 | 0.7522 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "allenai/tulu-2-7b", "model-index": [{"name": "tulu2-7b-cost-UF-UI-5e-7", "results": []}]} | just1nseo/tulu2-7b-cost-UF-UI-5e-7 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:allenai/tulu-2-7b",
"region:us"
]
| null | 2024-04-28T18:15:03+00:00 |
text-generation | transformers |
# Phi-3 Mini-128K-Instruct ONNX model for onnxruntime-web
This is the same model as the [official phi3 onnx model](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx), with a few changes to make it work with onnxruntime-web:
1. the model is fp16 with int4 block quantization for weights
2. the 'logits' output is fp32
3. the model uses MHA instead of GQA
4. the ONNX file and external data file need to stay below 2 GB to be cacheable in Chromium
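Before deploying with onnxruntime-web, these changes can be sanity-checked with the plain `onnxruntime` Python API, as sketched below (the file name is illustrative):
```python
import onnxruntime as ort

# Illustrative file name; use the actual .onnx file from this repository
sess = ort.InferenceSession("phi3-mini-128k-instruct.onnx")
for out in sess.get_outputs():
    # Per change (2) above, 'logits' should report tensor(float), i.e. fp32
    print(out.name, out.type)
```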
| {"license": "mit", "tags": ["ONNX", "DML", "ONNXRuntime", "phi3", "nlp", "conversational", "custom_code"], "pipeline_tag": "text-generation"} | schmuell/phi3-int4 | null | [
"transformers",
"onnx",
"mistral",
"text-generation",
"ONNX",
"DML",
"ONNXRuntime",
"phi3",
"nlp",
"conversational",
"custom_code",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T18:15:11+00:00 |
null | null | {} | SilasModder/testmod042824 | null | [
"region:us"
]
| null | 2024-04-28T18:15:26+00:00 |
|
reinforcement-learning | ml-agents |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: vicha-w/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget"]} | vicha-w/ppo-SnowballTarget | null | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| null | 2024-04-28T18:15:27+00:00 |
text-generation | transformers | {} | Weni/WeniGPT-Agents-Llama3-5.0.10-SFT-AWQ | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
]
| null | 2024-04-28T18:15:35+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | PurCL/codeart-3m | null | [
"transformers",
"safetensors",
"codeart",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T18:15:58+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | PurCL/codeart-3m-max_trans_closure_4 | null | [
"transformers",
"safetensors",
"codeart",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T18:18:15+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | PurCL/codeart-3m-max_trans_closure_6 | null | [
"transformers",
"safetensors",
"codeart",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T18:18:36+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | PurCL/codeart-3m-wo_local_mask | null | [
"transformers",
"safetensors",
"codeart",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T18:18:57+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | PurCL/codeart-3m-wo_rel_pos_bias | null | [
"transformers",
"safetensors",
"codeart",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T18:19:17+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tulu2-7b-cost-UF-UI-HHRLHF-5e-7
This model is a fine-tuned version of [allenai/tulu-2-7b](https://huggingface.co/allenai/tulu-2-7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6879
- Rewards/chosen: -0.0447
- Rewards/rejected: -0.0566
- Rewards/accuracies: 0.5810
- Rewards/margins: 0.0120
- Rewards/margins Max: 0.1068
- Rewards/margins Min: -0.0804
- Rewards/margins Std: 0.0620
- Logps/rejected: -324.0695
- Logps/chosen: -341.4869
- Logits/rejected: 0.8995
- Logits/chosen: 0.7481
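The `Rewards/*` metrics above presumably follow the standard DPO implicit reward (an assumption; the card does not define them), in which case

$$ r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}, \qquad \mathrm{margin} = r_\theta(x, y_{\mathrm{chosen}}) - r_\theta(x, y_{\mathrm{rejected}}), $$

so the positive mean margin (0.0120) indicates that chosen responses receive slightly higher implicit rewards than rejected ones.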
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Rewards/margins Max | Rewards/margins Min | Rewards/margins Std | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:-------------------:|:-------------------:|:-------------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6327 | 1.0 | 3974 | 0.6879 | -0.0447 | -0.0566 | 0.5810 | 0.0120 | 0.1068 | -0.0804 | 0.0620 | -324.0695 | -341.4869 | 0.8995 | 0.7481 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "allenai/tulu-2-7b", "model-index": [{"name": "tulu2-7b-cost-UF-UI-HHRLHF-5e-7", "results": []}]} | just1nseo/tulu2-7b-cost-UF-UI-HHRLHF-5e-7 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:allenai/tulu-2-7b",
"region:us"
]
| null | 2024-04-28T18:19:37+00:00 |
null | null |
# Learning Hugging Face
* Created a model
* Created a space
* Created a yaml inside README
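One way to generate that YAML frontmatter programmatically is `huggingface_hub`'s `ModelCardData` helper; a sketch mirroring this repo's metadata, assuming the helper's current API:
```python
from huggingface_hub import ModelCardData

card_data = ModelCardData(
    language=["en", "ko"],
    license="mit",
    tags=["demo", "tayaee"],
    datasets=["dataset1", "dataset2"],
    base_model="meta-llama/Meta-Llama-3-8B",
)
print(card_data.to_yaml())  # emits the YAML block that sits between `---` markers
```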
| {"language": ["en", "ko"], "license": "mit", "tags": ["demo", "tayaee"], "datasets": ["dataset1", "dataset2"], "metrics": ["metric1", "metric2"], "thumbnail": "url to a thumbnail used in social sharing", "base_model": "meta-llama/Meta-Llama-3-8B"} | tayaee/demo1 | null | [
"demo",
"tayaee",
"en",
"ko",
"dataset:dataset1",
"dataset:dataset2",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:mit",
"region:us"
]
| null | 2024-04-28T18:20:22+00:00 |
text-generation | transformers | {} | SwastikN/sxc_chem_llm | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T18:20:28+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | PurCL/codeart-3m-wo_trans_closure | null | [
"transformers",
"safetensors",
"rabert",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T18:20:48+00:00 |
text-generation | transformers |
---
datasets:
- qiaojin/PubMedQA
- kroshan/BioASQ
language:
- en
library_name: transformers
pipeline_tag: table-question-answering
tags:
- chemistry
- biology
- molecular
- synthetic
- language model
---
## Description

This model is an example of how a fine-tuned LLM, even without the full depth, size, and complexity of larger and more expensive models, can be useful in context-sensitive situations. In our use case, we apply this LLM as part of a broader electronic lab notebook software setup for molecular and computational biologists. This GPT-2 has been fine-tuned on datasets from BioASQ and PubMedQA and is now knowledgeable enough in biochemistry to assist scientists, integrating not just as a copilot-like tool but also as a lab partner in the Design-Build-Test-Learn workflow that is growing ever more prominent in synthetic biology.
## Intel Optimization Inference Code Sample

We made use of both the BF16 data type and INT8 quantization to improve performance. BF16 halves the memory footprint compared to FP32, allowing larger models and/or larger batches to fit into memory; moreover, BF16 is supported by modern Intel CPUs, and operations on it are optimized. Quantizing models to INT8 can reduce the model size, making better use of cache and speeding up load times. We then optimized further with OpenVINO to make the model run better on Intel hardware, exporting it to ONNX and then converting it to the OpenVINO Intermediate Representation.
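A minimal sketch of that ONNX-to-IR conversion step, assuming the current `openvino` Python API (file names are illustrative):
```python
import openvino as ov

# Convert the exported ONNX model to OpenVINO IR and save it to disk
ov_model = ov.convert_model("gpt2_medium.onnx")            # illustrative file name
ov.save_model(ov_model, "ovc_output/converted_model.xml")
```
The inference sample below then loads the converted IR: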
```python
from openvino.runtime import Core
import numpy as np

# Initialize the OpenVINO runtime Core
ie = Core()

# Load and compile the model for the CPU device
compiled_model = ie.compile_model(model='../ovc_output/converted_model.xml', device_name="CPU")

# Prepare input: a non-tokenized example just for example's sake
input_ids = np.random.randint(0, 50256, (1, 10))

# Create a dictionary for the inputs expected by the model
inputs = {"input_ids": input_ids}

# Create an infer request and start synchronous inference
result = compiled_model.create_infer_request().infer(inputs=inputs)

# Access output tensor data directly from the result using the appropriate output key
output = result['outputs']
print("Inference results:", output)
```
In the fine-tuning file you will see our other optimizations.

We perform BF16 conversion as follows (we also implement a custom collator):
```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained('gpt2-medium').to(torch.bfloat16)
```
We perform INT8 quantization as follows:
```python
from torch.quantization import quantize_dynamic

# Load the full-precision model
model.eval()  # Ensure the model is in evaluation mode
quantized_model = quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
```
| {"tags": ["4th gen xeon"]} | pikhan/gpt2-medium-biochem-bioasq-pubmedqa-demo | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"4th gen xeon",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T18:21:32+00:00 |
image-segmentation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b5-p142-cvat-vgs
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the vigneshgs7/segformer_open_cv_RGB_L_0_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0131
- Mean Iou: 0.4961
- Mean Accuracy: 0.9922
- Overall Accuracy: 0.9922
- Accuracy Background: nan
- Accuracy Object: 0.9922
- Iou Background: 0.0
- Iou Object: 0.9922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
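A rough sketch of how these settings would be wired up with the 🤗 `Trainer` (an assumption, not the exact training script; the image/mask preprocessing transforms are omitted, and the split names are illustrative):
```python
from datasets import load_dataset
from transformers import SegformerForSemanticSegmentation, Trainer, TrainingArguments

# Dataset named in this card; preprocessing into pixel_values/labels is omitted here
ds = load_dataset("vigneshgs7/segformer_open_cv_RGB_L_0_1")

model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b5",
    num_labels=2,  # background / object, per the metrics above
)

args = TrainingArguments(
    output_dir="segformer-b5-p142-cvat-vgs",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)

trainer = Trainer(model=model, args=args, train_dataset=ds["train"])
trainer.train()
```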
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Object | Iou Background | Iou Object |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------:|:---------------:|:--------------:|:----------:|
| 0.2847 | 0.06 | 20 | 0.3843 | 0.4662 | 0.9324 | 0.9324 | nan | 0.9324 | 0.0 | 0.9324 |
| 0.1681 | 0.11 | 40 | 0.1983 | 0.4704 | 0.9408 | 0.9408 | nan | 0.9408 | 0.0 | 0.9408 |
| 0.1592 | 0.17 | 60 | 0.1303 | 0.4745 | 0.9489 | 0.9489 | nan | 0.9489 | 0.0 | 0.9489 |
| 0.1177 | 0.23 | 80 | 0.0922 | 0.4944 | 0.9888 | 0.9888 | nan | 0.9888 | 0.0 | 0.9888 |
| 0.062 | 0.29 | 100 | 0.0745 | 0.4946 | 0.9892 | 0.9892 | nan | 0.9892 | 0.0 | 0.9892 |
| 0.0767 | 0.34 | 120 | 0.0545 | 0.4852 | 0.9703 | 0.9703 | nan | 0.9703 | 0.0 | 0.9703 |
| 0.0984 | 0.4 | 140 | 0.0621 | 0.4938 | 0.9875 | 0.9875 | nan | 0.9875 | 0.0 | 0.9875 |
| 0.1779 | 0.46 | 160 | 0.0504 | 0.4961 | 0.9921 | 0.9921 | nan | 0.9921 | 0.0 | 0.9921 |
| 0.0468 | 0.52 | 180 | 0.0407 | 0.4904 | 0.9807 | 0.9807 | nan | 0.9807 | 0.0 | 0.9807 |
| 0.0618 | 0.57 | 200 | 0.0390 | 0.4936 | 0.9873 | 0.9873 | nan | 0.9873 | 0.0 | 0.9873 |
| 0.062 | 0.63 | 220 | 0.0348 | 0.4947 | 0.9894 | 0.9894 | nan | 0.9894 | 0.0 | 0.9894 |
| 0.0357 | 0.69 | 240 | 0.0341 | 0.4914 | 0.9828 | 0.9828 | nan | 0.9828 | 0.0 | 0.9828 |
| 0.0304 | 0.74 | 260 | 0.0351 | 0.4960 | 0.9920 | 0.9920 | nan | 0.9920 | 0.0 | 0.9920 |
| 0.0267 | 0.8 | 280 | 0.0311 | 0.4938 | 0.9877 | 0.9877 | nan | 0.9877 | 0.0 | 0.9877 |
| 0.0536 | 0.86 | 300 | 0.0282 | 0.4904 | 0.9807 | 0.9807 | nan | 0.9807 | 0.0 | 0.9807 |
| 0.049 | 0.92 | 320 | 0.0274 | 0.4928 | 0.9855 | 0.9855 | nan | 0.9855 | 0.0 | 0.9855 |
| 0.0304 | 0.97 | 340 | 0.0262 | 0.4936 | 0.9872 | 0.9872 | nan | 0.9872 | 0.0 | 0.9872 |
| 0.0232 | 1.03 | 360 | 0.0251 | 0.4923 | 0.9847 | 0.9847 | nan | 0.9847 | 0.0 | 0.9847 |
| 0.0304 | 1.09 | 380 | 0.0240 | 0.4917 | 0.9835 | 0.9835 | nan | 0.9835 | 0.0 | 0.9835 |
| 0.0451 | 1.15 | 400 | 0.0261 | 0.4964 | 0.9927 | 0.9927 | nan | 0.9927 | 0.0 | 0.9927 |
| 0.0254 | 1.2 | 420 | 0.0234 | 0.4929 | 0.9859 | 0.9859 | nan | 0.9859 | 0.0 | 0.9859 |
| 0.0354 | 1.26 | 440 | 0.0229 | 0.4931 | 0.9861 | 0.9861 | nan | 0.9861 | 0.0 | 0.9861 |
| 0.2103 | 1.32 | 460 | 0.0224 | 0.4951 | 0.9902 | 0.9902 | nan | 0.9902 | 0.0 | 0.9902 |
| 0.041 | 1.38 | 480 | 0.0222 | 0.4920 | 0.9839 | 0.9839 | nan | 0.9839 | 0.0 | 0.9839 |
| 0.0297 | 1.43 | 500 | 0.0223 | 0.4950 | 0.9900 | 0.9900 | nan | 0.9900 | 0.0 | 0.9900 |
| 0.0299 | 1.49 | 520 | 0.0227 | 0.4961 | 0.9923 | 0.9923 | nan | 0.9923 | 0.0 | 0.9923 |
| 0.0213 | 1.55 | 540 | 0.0209 | 0.4947 | 0.9895 | 0.9895 | nan | 0.9895 | 0.0 | 0.9895 |
| 0.0269 | 1.6 | 560 | 0.0214 | 0.4909 | 0.9817 | 0.9817 | nan | 0.9817 | 0.0 | 0.9817 |
| 0.2199 | 1.66 | 580 | 0.0216 | 0.4956 | 0.9912 | 0.9912 | nan | 0.9912 | 0.0 | 0.9912 |
| 0.0191 | 1.72 | 600 | 0.0208 | 0.4935 | 0.9869 | 0.9869 | nan | 0.9869 | 0.0 | 0.9869 |
| 0.0265 | 1.78 | 620 | 0.0201 | 0.4941 | 0.9882 | 0.9882 | nan | 0.9882 | 0.0 | 0.9882 |
| 0.0244 | 1.83 | 640 | 0.0213 | 0.4910 | 0.9820 | 0.9820 | nan | 0.9820 | 0.0 | 0.9820 |
| 0.0172 | 1.89 | 660 | 0.0199 | 0.4929 | 0.9858 | 0.9858 | nan | 0.9858 | 0.0 | 0.9858 |
| 0.0339 | 1.95 | 680 | 0.0190 | 0.4930 | 0.9859 | 0.9859 | nan | 0.9859 | 0.0 | 0.9859 |
| 0.027 | 2.01 | 700 | 0.0192 | 0.4953 | 0.9906 | 0.9906 | nan | 0.9906 | 0.0 | 0.9906 |
| 0.0221 | 2.06 | 720 | 0.0195 | 0.4915 | 0.9830 | 0.9830 | nan | 0.9830 | 0.0 | 0.9830 |
| 0.0461 | 2.12 | 740 | 0.0188 | 0.4953 | 0.9905 | 0.9905 | nan | 0.9905 | 0.0 | 0.9905 |
| 0.0444 | 2.18 | 760 | 0.0189 | 0.4957 | 0.9914 | 0.9914 | nan | 0.9914 | 0.0 | 0.9914 |
| 0.0211 | 2.23 | 780 | 0.0184 | 0.4949 | 0.9898 | 0.9898 | nan | 0.9898 | 0.0 | 0.9898 |
| 0.0221 | 2.29 | 800 | 0.0186 | 0.4963 | 0.9925 | 0.9925 | nan | 0.9925 | 0.0 | 0.9925 |
| 0.0165 | 2.35 | 820 | 0.0181 | 0.4942 | 0.9883 | 0.9883 | nan | 0.9883 | 0.0 | 0.9883 |
| 0.0171 | 2.41 | 840 | 0.0181 | 0.4923 | 0.9846 | 0.9846 | nan | 0.9846 | 0.0 | 0.9846 |
| 0.0202 | 2.46 | 860 | 0.0178 | 0.4958 | 0.9915 | 0.9915 | nan | 0.9915 | 0.0 | 0.9915 |
| 0.0222 | 2.52 | 880 | 0.0178 | 0.4922 | 0.9844 | 0.9844 | nan | 0.9844 | 0.0 | 0.9844 |
| 0.018 | 2.58 | 900 | 0.0162 | 0.4949 | 0.9898 | 0.9898 | nan | 0.9898 | 0.0 | 0.9898 |
| 0.0288 | 2.64 | 920 | 0.0168 | 0.4943 | 0.9887 | 0.9887 | nan | 0.9887 | 0.0 | 0.9887 |
| 0.016 | 2.69 | 940 | 0.0178 | 0.4968 | 0.9936 | 0.9936 | nan | 0.9936 | 0.0 | 0.9936 |
| 0.0184 | 2.75 | 960 | 0.0172 | 0.4935 | 0.9870 | 0.9870 | nan | 0.9870 | 0.0 | 0.9870 |
| 0.0172 | 2.81 | 980 | 0.0175 | 0.4950 | 0.9900 | 0.9900 | nan | 0.9900 | 0.0 | 0.9900 |
| 0.0168 | 2.87 | 1000 | 0.0172 | 0.4951 | 0.9902 | 0.9902 | nan | 0.9902 | 0.0 | 0.9902 |
| 0.0197 | 2.92 | 1020 | 0.0169 | 0.4961 | 0.9923 | 0.9923 | nan | 0.9923 | 0.0 | 0.9923 |
| 0.0177 | 2.98 | 1040 | 0.0170 | 0.4961 | 0.9922 | 0.9922 | nan | 0.9922 | 0.0 | 0.9922 |
| 0.0377 | 3.04 | 1060 | 0.0163 | 0.4944 | 0.9888 | 0.9888 | nan | 0.9888 | 0.0 | 0.9888 |
| 0.0168 | 3.09 | 1080 | 0.0162 | 0.4953 | 0.9906 | 0.9906 | nan | 0.9906 | 0.0 | 0.9906 |
| 0.0167 | 3.15 | 1100 | 0.0166 | 0.4961 | 0.9922 | 0.9922 | nan | 0.9922 | 0.0 | 0.9922 |
| 0.0213 | 3.21 | 1120 | 0.0164 | 0.4948 | 0.9895 | 0.9895 | nan | 0.9895 | 0.0 | 0.9895 |
| 0.0195 | 3.27 | 1140 | 0.0162 | 0.4947 | 0.9894 | 0.9894 | nan | 0.9894 | 0.0 | 0.9894 |
| 0.014 | 3.32 | 1160 | 0.0160 | 0.4950 | 0.9900 | 0.9900 | nan | 0.9900 | 0.0 | 0.9900 |
| 0.0221 | 3.38 | 1180 | 0.0164 | 0.4961 | 0.9922 | 0.9922 | nan | 0.9922 | 0.0 | 0.9922 |
| 0.0162 | 3.44 | 1200 | 0.0159 | 0.4945 | 0.9890 | 0.9890 | nan | 0.9890 | 0.0 | 0.9890 |
| 0.0153 | 3.5 | 1220 | 0.0152 | 0.4957 | 0.9914 | 0.9914 | nan | 0.9914 | 0.0 | 0.9914 |
| 0.0145 | 3.55 | 1240 | 0.0161 | 0.4935 | 0.9871 | 0.9871 | nan | 0.9871 | 0.0 | 0.9871 |
| 0.0139 | 3.61 | 1260 | 0.0155 | 0.4951 | 0.9902 | 0.9902 | nan | 0.9902 | 0.0 | 0.9902 |
| 0.0153 | 3.67 | 1280 | 0.0157 | 0.4942 | 0.9884 | 0.9884 | nan | 0.9884 | 0.0 | 0.9884 |
| 0.0156 | 3.72 | 1300 | 0.0157 | 0.4949 | 0.9898 | 0.9898 | nan | 0.9898 | 0.0 | 0.9898 |
| 0.033 | 3.78 | 1320 | 0.0157 | 0.4952 | 0.9903 | 0.9903 | nan | 0.9903 | 0.0 | 0.9903 |
| 0.0219 | 3.84 | 1340 | 0.0153 | 0.4957 | 0.9915 | 0.9915 | nan | 0.9915 | 0.0 | 0.9915 |
| 0.0166 | 3.9 | 1360 | 0.0162 | 0.4935 | 0.9871 | 0.9871 | nan | 0.9871 | 0.0 | 0.9871 |
| 0.0168 | 3.95 | 1380 | 0.0157 | 0.4949 | 0.9897 | 0.9897 | nan | 0.9897 | 0.0 | 0.9897 |
| 0.0177 | 4.01 | 1400 | 0.0153 | 0.4966 | 0.9932 | 0.9932 | nan | 0.9932 | 0.0 | 0.9932 |
| 0.0136 | 4.07 | 1420 | 0.0150 | 0.4952 | 0.9905 | 0.9905 | nan | 0.9905 | 0.0 | 0.9905 |
| 0.0334 | 4.13 | 1440 | 0.0156 | 0.4956 | 0.9912 | 0.9912 | nan | 0.9912 | 0.0 | 0.9912 |
| 0.019 | 4.18 | 1460 | 0.0154 | 0.4950 | 0.9899 | 0.9899 | nan | 0.9899 | 0.0 | 0.9899 |
| 0.0147 | 4.24 | 1480 | 0.0148 | 0.4960 | 0.9920 | 0.9920 | nan | 0.9920 | 0.0 | 0.9920 |
| 0.0135 | 4.3 | 1500 | 0.0146 | 0.4951 | 0.9902 | 0.9902 | nan | 0.9902 | 0.0 | 0.9902 |
| 0.0186 | 4.36 | 1520 | 0.0143 | 0.4966 | 0.9933 | 0.9933 | nan | 0.9933 | 0.0 | 0.9933 |
| 0.0153 | 4.41 | 1540 | 0.0141 | 0.4954 | 0.9909 | 0.9909 | nan | 0.9909 | 0.0 | 0.9909 |
| 0.0181 | 4.47 | 1560 | 0.0145 | 0.4954 | 0.9908 | 0.9908 | nan | 0.9908 | 0.0 | 0.9908 |
| 0.0266 | 4.53 | 1580 | 0.0146 | 0.4953 | 0.9907 | 0.9907 | nan | 0.9907 | 0.0 | 0.9907 |
| 0.0141 | 4.58 | 1600 | 0.0147 | 0.4952 | 0.9904 | 0.9904 | nan | 0.9904 | 0.0 | 0.9904 |
| 0.0145 | 4.64 | 1620 | 0.0150 | 0.4947 | 0.9894 | 0.9894 | nan | 0.9894 | 0.0 | 0.9894 |
| 0.0128 | 4.7 | 1640 | 0.0151 | 0.4964 | 0.9928 | 0.9928 | nan | 0.9928 | 0.0 | 0.9928 |
| 0.0119 | 4.76 | 1660 | 0.0143 | 0.4948 | 0.9897 | 0.9897 | nan | 0.9897 | 0.0 | 0.9897 |
| 0.0133 | 4.81 | 1680 | 0.0144 | 0.4950 | 0.9900 | 0.9900 | nan | 0.9900 | 0.0 | 0.9900 |
| 0.0151 | 4.87 | 1700 | 0.0143 | 0.4956 | 0.9911 | 0.9911 | nan | 0.9911 | 0.0 | 0.9911 |
| 0.0211 | 4.93 | 1720 | 0.0149 | 0.4965 | 0.9930 | 0.9930 | nan | 0.9930 | 0.0 | 0.9930 |
| 0.0136 | 4.99 | 1740 | 0.0144 | 0.4964 | 0.9928 | 0.9928 | nan | 0.9928 | 0.0 | 0.9928 |
| 0.0129 | 5.04 | 1760 | 0.0142 | 0.4967 | 0.9934 | 0.9934 | nan | 0.9934 | 0.0 | 0.9934 |
| 0.0176 | 5.1 | 1780 | 0.0142 | 0.4965 | 0.9930 | 0.9930 | nan | 0.9930 | 0.0 | 0.9930 |
| 0.0119 | 5.16 | 1800 | 0.0141 | 0.4958 | 0.9916 | 0.9916 | nan | 0.9916 | 0.0 | 0.9916 |
| 0.021 | 5.21 | 1820 | 0.0143 | 0.4960 | 0.9920 | 0.9920 | nan | 0.9920 | 0.0 | 0.9920 |
| 0.0146 | 5.27 | 1840 | 0.0137 | 0.4961 | 0.9922 | 0.9922 | nan | 0.9922 | 0.0 | 0.9922 |
| 0.0158 | 5.33 | 1860 | 0.0138 | 0.4953 | 0.9905 | 0.9905 | nan | 0.9905 | 0.0 | 0.9905 |
| 0.014 | 5.39 | 1880 | 0.0142 | 0.4956 | 0.9913 | 0.9913 | nan | 0.9913 | 0.0 | 0.9913 |
| 0.0145 | 5.44 | 1900 | 0.0145 | 0.4952 | 0.9905 | 0.9905 | nan | 0.9905 | 0.0 | 0.9905 |
| 0.019 | 5.5 | 1920 | 0.0145 | 0.4960 | 0.9920 | 0.9920 | nan | 0.9920 | 0.0 | 0.9920 |
| 0.0134 | 5.56 | 1940 | 0.0143 | 0.4958 | 0.9915 | 0.9915 | nan | 0.9915 | 0.0 | 0.9915 |
| 0.011 | 5.62 | 1960 | 0.0141 | 0.4955 | 0.9910 | 0.9910 | nan | 0.9910 | 0.0 | 0.9910 |
| 0.0159 | 5.67 | 1980 | 0.0143 | 0.4971 | 0.9942 | 0.9942 | nan | 0.9942 | 0.0 | 0.9942 |
| 0.0132 | 5.73 | 2000 | 0.0140 | 0.4966 | 0.9933 | 0.9933 | nan | 0.9933 | 0.0 | 0.9933 |
| 0.017 | 5.79 | 2020 | 0.0136 | 0.4964 | 0.9928 | 0.9928 | nan | 0.9928 | 0.0 | 0.9928 |
| 0.0156 | 5.85 | 2040 | 0.0139 | 0.4951 | 0.9902 | 0.9902 | nan | 0.9902 | 0.0 | 0.9902 |
| 0.0169 | 5.9 | 2060 | 0.0142 | 0.4943 | 0.9887 | 0.9887 | nan | 0.9887 | 0.0 | 0.9887 |
| 0.0337 | 5.96 | 2080 | 0.0145 | 0.4967 | 0.9933 | 0.9933 | nan | 0.9933 | 0.0 | 0.9933 |
| 0.0158 | 6.02 | 2100 | 0.0141 | 0.4949 | 0.9898 | 0.9898 | nan | 0.9898 | 0.0 | 0.9898 |
| 0.0401 | 6.07 | 2120 | 0.0139 | 0.4956 | 0.9912 | 0.9912 | nan | 0.9912 | 0.0 | 0.9912 |
| 0.0629 | 6.13 | 2140 | 0.0138 | 0.4952 | 0.9904 | 0.9904 | nan | 0.9904 | 0.0 | 0.9904 |
| 0.0143 | 6.19 | 2160 | 0.0142 | 0.4967 | 0.9935 | 0.9935 | nan | 0.9935 | 0.0 | 0.9935 |
| 0.0133 | 6.25 | 2180 | 0.0135 | 0.4957 | 0.9915 | 0.9915 | nan | 0.9915 | 0.0 | 0.9915 |
| 0.0326 | 6.3 | 2200 | 0.0139 | 0.4963 | 0.9925 | 0.9925 | nan | 0.9925 | 0.0 | 0.9925 |
| 0.0141 | 6.36 | 2220 | 0.0133 | 0.4955 | 0.9910 | 0.9910 | nan | 0.9910 | 0.0 | 0.9910 |
| 0.0119 | 6.42 | 2240 | 0.0134 | 0.4958 | 0.9915 | 0.9915 | nan | 0.9915 | 0.0 | 0.9915 |
| 0.0133 | 6.48 | 2260 | 0.0139 | 0.4962 | 0.9924 | 0.9924 | nan | 0.9924 | 0.0 | 0.9924 |
| 0.0123 | 6.53 | 2280 | 0.0138 | 0.4967 | 0.9934 | 0.9934 | nan | 0.9934 | 0.0 | 0.9934 |
| 0.014 | 6.59 | 2300 | 0.0138 | 0.4962 | 0.9925 | 0.9925 | nan | 0.9925 | 0.0 | 0.9925 |
| 0.0137 | 6.65 | 2320 | 0.0136 | 0.4958 | 0.9916 | 0.9916 | nan | 0.9916 | 0.0 | 0.9916 |
| 0.0173 | 6.7 | 2340 | 0.0138 | 0.4964 | 0.9928 | 0.9928 | nan | 0.9928 | 0.0 | 0.9928 |
| 0.0137 | 6.76 | 2360 | 0.0136 | 0.4953 | 0.9905 | 0.9905 | nan | 0.9905 | 0.0 | 0.9905 |
| 0.0153 | 6.82 | 2380 | 0.0134 | 0.4958 | 0.9916 | 0.9916 | nan | 0.9916 | 0.0 | 0.9916 |
| 0.0135 | 6.88 | 2400 | 0.0137 | 0.4963 | 0.9926 | 0.9926 | nan | 0.9926 | 0.0 | 0.9926 |
| 0.0151 | 6.93 | 2420 | 0.0137 | 0.4952 | 0.9904 | 0.9904 | nan | 0.9904 | 0.0 | 0.9904 |
| 0.0122 | 6.99 | 2440 | 0.0134 | 0.4959 | 0.9918 | 0.9918 | nan | 0.9918 | 0.0 | 0.9918 |
| 0.013 | 7.05 | 2460 | 0.0135 | 0.4970 | 0.9941 | 0.9941 | nan | 0.9941 | 0.0 | 0.9941 |
| 0.0134 | 7.11 | 2480 | 0.0133 | 0.4964 | 0.9928 | 0.9928 | nan | 0.9928 | 0.0 | 0.9928 |
| 0.0145 | 7.16 | 2500 | 0.0134 | 0.4962 | 0.9924 | 0.9924 | nan | 0.9924 | 0.0 | 0.9924 |
| 0.028 | 7.22 | 2520 | 0.0135 | 0.4962 | 0.9924 | 0.9924 | nan | 0.9924 | 0.0 | 0.9924 |
| 0.0288 | 7.28 | 2540 | 0.0137 | 0.4967 | 0.9933 | 0.9933 | nan | 0.9933 | 0.0 | 0.9933 |
| 0.0117 | 7.34 | 2560 | 0.0135 | 0.4964 | 0.9927 | 0.9927 | nan | 0.9927 | 0.0 | 0.9927 |
| 0.013 | 7.39 | 2580 | 0.0136 | 0.4966 | 0.9932 | 0.9932 | nan | 0.9932 | 0.0 | 0.9932 |
| 0.0158 | 7.45 | 2600 | 0.0134 | 0.4950 | 0.9899 | 0.9899 | nan | 0.9899 | 0.0 | 0.9899 |
| 0.0135 | 7.51 | 2620 | 0.0134 | 0.4964 | 0.9928 | 0.9928 | nan | 0.9928 | 0.0 | 0.9928 |
| 0.0136 | 7.56 | 2640 | 0.0140 | 0.4967 | 0.9935 | 0.9935 | nan | 0.9935 | 0.0 | 0.9935 |
| 0.0396 | 7.62 | 2660 | 0.0133 | 0.4961 | 0.9922 | 0.9922 | nan | 0.9922 | 0.0 | 0.9922 |
| 0.0109 | 7.68 | 2680 | 0.0134 | 0.4963 | 0.9925 | 0.9925 | nan | 0.9925 | 0.0 | 0.9925 |
| 0.0148 | 7.74 | 2700 | 0.0133 | 0.4963 | 0.9925 | 0.9925 | nan | 0.9925 | 0.0 | 0.9925 |
| 0.0121 | 7.79 | 2720 | 0.0140 | 0.4945 | 0.9890 | 0.9890 | nan | 0.9890 | 0.0 | 0.9890 |
| 0.0109 | 7.85 | 2740 | 0.0139 | 0.4957 | 0.9913 | 0.9913 | nan | 0.9913 | 0.0 | 0.9913 |
| 0.014 | 7.91 | 2760 | 0.0135 | 0.4957 | 0.9915 | 0.9915 | nan | 0.9915 | 0.0 | 0.9915 |
| 0.0199 | 7.97 | 2780 | 0.0134 | 0.4959 | 0.9917 | 0.9917 | nan | 0.9917 | 0.0 | 0.9917 |
| 0.0119 | 8.02 | 2800 | 0.0136 | 0.4958 | 0.9916 | 0.9916 | nan | 0.9916 | 0.0 | 0.9916 |
| 0.0129 | 8.08 | 2820 | 0.0136 | 0.4962 | 0.9924 | 0.9924 | nan | 0.9924 | 0.0 | 0.9924 |
| 0.0108 | 8.14 | 2840 | 0.0134 | 0.4959 | 0.9917 | 0.9917 | nan | 0.9917 | 0.0 | 0.9917 |
| 0.0209 | 8.19 | 2860 | 0.0136 | 0.4960 | 0.9920 | 0.9920 | nan | 0.9920 | 0.0 | 0.9920 |
| 0.0154 | 8.25 | 2880 | 0.0137 | 0.4964 | 0.9928 | 0.9928 | nan | 0.9928 | 0.0 | 0.9928 |
| 0.0141 | 8.31 | 2900 | 0.0132 | 0.4965 | 0.9929 | 0.9929 | nan | 0.9929 | 0.0 | 0.9929 |
| 0.0187 | 8.37 | 2920 | 0.0131 | 0.4956 | 0.9912 | 0.9912 | nan | 0.9912 | 0.0 | 0.9912 |
| 0.0124 | 8.42 | 2940 | 0.0133 | 0.4959 | 0.9918 | 0.9918 | nan | 0.9918 | 0.0 | 0.9918 |
| 0.0135 | 8.48 | 2960 | 0.0132 | 0.4963 | 0.9926 | 0.9926 | nan | 0.9926 | 0.0 | 0.9926 |
| 0.0283 | 8.54 | 2980 | 0.0131 | 0.4958 | 0.9917 | 0.9917 | nan | 0.9917 | 0.0 | 0.9917 |
| 0.0691 | 8.6 | 3000 | 0.0131 | 0.4965 | 0.9930 | 0.9930 | nan | 0.9930 | 0.0 | 0.9930 |
| 0.0142 | 8.65 | 3020 | 0.0131 | 0.4965 | 0.9929 | 0.9929 | nan | 0.9929 | 0.0 | 0.9929 |
| 0.0155 | 8.71 | 3040 | 0.0130 | 0.4966 | 0.9931 | 0.9931 | nan | 0.9931 | 0.0 | 0.9931 |
| 0.0115 | 8.77 | 3060 | 0.0129 | 0.4966 | 0.9932 | 0.9932 | nan | 0.9932 | 0.0 | 0.9932 |
| 0.0095 | 8.83 | 3080 | 0.0130 | 0.4963 | 0.9927 | 0.9927 | nan | 0.9927 | 0.0 | 0.9927 |
| 0.012 | 8.88 | 3100 | 0.0132 | 0.4954 | 0.9907 | 0.9907 | nan | 0.9907 | 0.0 | 0.9907 |
| 0.0153 | 8.94 | 3120 | 0.0132 | 0.4965 | 0.9930 | 0.9930 | nan | 0.9930 | 0.0 | 0.9930 |
| 0.0141 | 9.0 | 3140 | 0.0134 | 0.4958 | 0.9917 | 0.9917 | nan | 0.9917 | 0.0 | 0.9917 |
| 0.0141 | 9.05 | 3160 | 0.0133 | 0.4958 | 0.9915 | 0.9915 | nan | 0.9915 | 0.0 | 0.9915 |
| 0.016 | 9.11 | 3180 | 0.0133 | 0.4964 | 0.9929 | 0.9929 | nan | 0.9929 | 0.0 | 0.9929 |
| 0.017 | 9.17 | 3200 | 0.0132 | 0.4965 | 0.9929 | 0.9929 | nan | 0.9929 | 0.0 | 0.9929 |
| 0.0245 | 9.23 | 3220 | 0.0132 | 0.4961 | 0.9921 | 0.9921 | nan | 0.9921 | 0.0 | 0.9921 |
| 0.0101 | 9.28 | 3240 | 0.0132 | 0.4962 | 0.9924 | 0.9924 | nan | 0.9924 | 0.0 | 0.9924 |
| 0.012 | 9.34 | 3260 | 0.0133 | 0.4959 | 0.9917 | 0.9917 | nan | 0.9917 | 0.0 | 0.9917 |
| 0.0111 | 9.4 | 3280 | 0.0133 | 0.4964 | 0.9928 | 0.9928 | nan | 0.9928 | 0.0 | 0.9928 |
| 0.0148 | 9.46 | 3300 | 0.0132 | 0.4962 | 0.9925 | 0.9925 | nan | 0.9925 | 0.0 | 0.9925 |
| 0.0124 | 9.51 | 3320 | 0.0135 | 0.4967 | 0.9934 | 0.9934 | nan | 0.9934 | 0.0 | 0.9934 |
| 0.0209 | 9.57 | 3340 | 0.0133 | 0.4963 | 0.9926 | 0.9926 | nan | 0.9926 | 0.0 | 0.9926 |
| 0.0134 | 9.63 | 3360 | 0.0132 | 0.4960 | 0.9920 | 0.9920 | nan | 0.9920 | 0.0 | 0.9920 |
| 0.0146 | 9.68 | 3380 | 0.0132 | 0.4958 | 0.9916 | 0.9916 | nan | 0.9916 | 0.0 | 0.9916 |
| 0.0217 | 9.74 | 3400 | 0.0132 | 0.4961 | 0.9923 | 0.9923 | nan | 0.9923 | 0.0 | 0.9923 |
| 0.0142 | 9.8 | 3420 | 0.0131 | 0.4961 | 0.9923 | 0.9923 | nan | 0.9923 | 0.0 | 0.9923 |
| 0.0134 | 9.86 | 3440 | 0.0131 | 0.4959 | 0.9918 | 0.9918 | nan | 0.9918 | 0.0 | 0.9918 |
| 0.0131 | 9.91 | 3460 | 0.0131 | 0.4960 | 0.9920 | 0.9920 | nan | 0.9920 | 0.0 | 0.9920 |
| 0.0136 | 9.97 | 3480 | 0.0131 | 0.4961 | 0.9922 | 0.9922 | nan | 0.9922 | 0.0 | 0.9922 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.2.2
- Datasets 2.14.6
- Tokenizers 0.14.1
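
### Example usage

The card above reports metrics but no usage snippet. A minimal inference sketch is given below; it assumes the checkpoint at `vigneshgs7/segformer-b5-p142-cvat-vgs` loads with the standard SegFormer classes and that the binary label setup implied by the metrics table is correct. Both are assumptions, not confirmed by the card.

```python
# Hedged sketch: semantic segmentation with the fine-tuned SegFormer-B5.
# The checkpoint id comes from this card; the image URL is a placeholder.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

checkpoint = "vigneshgs7/segformer-b5-p142-cvat-vgs"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

image = Image.open(requests.get("https://example.com/sample.jpg", stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, num_labels, H/4, W/4)

# Upsample logits to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
mask = upsampled.argmax(dim=1)[0]
print(mask.shape, mask.unique())
```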
| {"license": "other", "tags": ["vision", "image-segmentation", "generated_from_trainer"], "base_model": "nvidia/mit-b5", "model-index": [{"name": "segformer-b5-p142-cvat-vgs", "results": []}]} | vigneshgs7/segformer-b5-p142-cvat-vgs | null | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b5",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T18:23:25+00:00 |
null | null | {"license": "cc-by-sa-3.0"} | sainivikas/sample | null | [
"license:cc-by-sa-3.0",
"region:us"
]
| null | 2024-04-28T18:25:23+00:00 |
|
text-generation | transformers | # mistral-orpo-mix-7k
This model is an ORPO full fine-tune of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the argilla/dpo-mix-7k dataset, trained with the [huggingface/alignment-handbook](https://github.com/huggingface/alignment-handbook).
## Training procedure
Trained for 4.5 hours on a single A100.
### Alignment Handbook recipe
```yaml
# Model arguments
model_name_or_path: mistralai/Mistral-7B-v0.1
model_revision: main
torch_dtype: bfloat16
use_flash_attention_2: true
trust_remote_code: true
# Data training arguments
chat_template: "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}"
dataset_mixer:
argilla/dpo-mix-7k: 1.0
dataset_splits:
- train
- test
preprocessing_num_workers: 8
# ORPOTrainer arguments
bf16: true
beta: 0.05
gradient_accumulation_steps: 8
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
hub_model_id: mistral-orpo-mix-7k
hub_private_repo: true
learning_rate: 5.0e-6
log_level: info
logging_steps: 10
lr_scheduler_type: inverse_sqrt
max_length: 2048
max_prompt_length: 1792
num_train_epochs: 3
optim: adamw_bnb_8bit
output_dir: data/mistral-orpo-mix-7k
per_device_train_batch_size: 4
push_to_hub: true
report_to:
- tensorboard
- wandb
save_strategy: "no"
seed: 42
warmup_steps: 100
```
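### Example usage

The recipe above defines a Zephyr-style chat template. A minimal inference sketch follows; it assumes the final checkpoint is published under this card's repo id (`eduagarcia/mistral-orpo-mix-7k`) and that the tokenizer ships with the template shown in the recipe. Treat both as assumptions rather than documented behavior.

```python
# Hedged sketch: chatting with the ORPO fine-tune via the template above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "eduagarcia/mistral-orpo-mix-7k"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain ORPO in one sentence."}]
# apply_chat_template renders the <|user|>/<|assistant|> turns defined in the recipe.
prompt_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(prompt_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][prompt_ids.shape[-1]:], skip_special_tokens=True))
```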
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"language": ["en"], "license": "apache-2.0", "tags": ["alignment-handbook", "trl", "orpo", "generated_from_trainer"], "datasets": ["argilla/dpo-mix-7k"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistral-orpo-mix-7k", "results": []}]} | eduagarcia/mistral-orpo-mix-7k | null | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"orpo",
"generated_from_trainer",
"conversational",
"en",
"dataset:argilla/dpo-mix-7k",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T18:25:38+00:00 |
reinforcement-learning | null |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
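
The course implementation is not reproduced here. As a rough orientation only, the policy-gradient update at the heart of REINFORCE looks like the sketch below; the return normalization and any hyperparameters are illustrative assumptions, not the settings used to train this agent.

```python
# Hedged sketch of the REINFORCE update; not the exact course implementation.
import torch

def reinforce_update(optimizer, log_probs, rewards, gamma=0.99):
    """One policy-gradient step from a single episode.

    log_probs: list of log pi(a_t | s_t) tensors collected during the rollout.
    rewards:   list of scalar rewards r_t for the same episode.
    """
    # Discounted returns G_t, computed backwards over the episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction

    # Loss is -sum_t log pi(a_t | s_t) * G_t; minimizing it ascends the return.
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```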
| {"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-pixelcopter-01", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "32.30 +/- 24.17", "name": "mean_reward", "verified": false}]}]}]} | Fk24/Reinforce-pixelcopter-01 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| null | 2024-04-28T18:25:38+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Meta-Llama-3-8B-Instruct_fictional_arc_Korean_v1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto TRL follows the list):
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 36
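
A sketch of how these settings might map onto a TRL `SFTTrainer` run is shown below. Only the numeric hyperparameters are taken from the card; the dataset placeholder, the text field name, and the TRL version (one contemporary with the card's Transformers 4.39) are assumptions.

```python
# Hedged sketch: the card's hyperparameters expressed as a TRL SFTTrainer run.
# The dataset below is a placeholder; the card only names a "generator" dataset.
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer

train_dataset = Dataset.from_dict({"text": ["<placeholder chat-formatted example>"]})

args = TrainingArguments(
    output_dir="llama3-8b-instruct-sft",
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=16,  # total train batch size 16, as in the card
    num_train_epochs=36,
    lr_scheduler_type="linear",
    seed=42,
)

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",  # assumed field name
)
trainer.train()
```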
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "other", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "Meta-Llama-3-8B-Instruct_fictional_arc_Korean_v1", "results": []}]} | yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Korean_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T18:27:12+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
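
Since this section is unpopulated, only a generic, hedged loading sketch can be offered. It assumes the checkpoint at `happylayers/sc75` loads with the standard causal-LM classes (the repo tags suggest a StableLM text-generation model); none of this is confirmed by the card.

```python
# Hedged sketch: generic causal-LM loading; the card itself provides no usage code.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "happylayers/sc75"  # repo id from this card; model details are undocumented
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```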
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | happylayers/sc75 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T18:28:09+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmd-8bars-2048-epochs10
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0086
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 4
- seed: 1
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4182 | 0.5 | 4994 | 1.4933 |
| 1.4626 | 1.0 | 9988 | 1.3082 |
| 1.3176 | 1.5 | 14982 | 1.2276 |
| 1.2604 | 2.0 | 19976 | 1.1815 |
| 1.2101 | 2.5 | 24970 | 1.1499 |
| 1.1804 | 3.0 | 29964 | 1.1260 |
| 1.1517 | 3.5 | 34958 | 1.1043 |
| 1.1349 | 4.0 | 39952 | 1.0887 |
| 1.1133 | 4.5 | 44946 | 1.0762 |
| 1.0995 | 5.0 | 49940 | 1.0618 |
| 1.0824 | 5.5 | 54934 | 1.0507 |
| 1.0713 | 6.0 | 59928 | 1.0423 |
| 1.0552 | 6.5 | 64922 | 1.0328 |
| 1.0505 | 7.0 | 69916 | 1.0279 |
| 1.0365 | 7.5 | 74910 | 1.0217 |
| 1.0307 | 8.0 | 79904 | 1.0153 |
| 1.022 | 8.5 | 84898 | 1.0107 |
| 1.0189 | 9.0 | 89892 | 1.0090 |
| 1.0129 | 9.5 | 94886 | 1.0084 |
| 1.0139 | 10.0 | 99880 | 1.0086 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
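
### Example usage

The card gives no inference example. The sketch below samples token sequences from the fine-tuned model; the start token and the existence of a matching tokenizer in the repo are assumptions, since the card does not document the MIDI vocabulary.

```python
# Hedged sketch: sampling token sequences from the fine-tuned GPT-2.
# The tokenizer/vocabulary for the MIDI encoding is assumed to live in the repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "hardikpatel/GPT2_Music_Generation_Trained"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

prompt = "PIECE_START"  # assumed start-of-piece token; check the repo's vocab
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(
    **inputs,
    max_length=2048,  # matches the training context length in the card's name
    do_sample=True,
    temperature=0.9,
    top_k=50,
)
print(tokenizer.decode(out[0]))
```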
| {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "lmd-8bars-2048-epochs10", "results": []}]} | hardikpatel/GPT2_Music_Generation_Trained | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-28T18:29:16+00:00 |