| Column | Type | Values / Lengths |
|:----------------|:----------------|:-----------------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | - |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | - |
| created_at | stringlengths | 25 to 25 |
| arxiv | sequencelengths | 0 to 201 |
| languages | sequencelengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | sequencelengths | 0 to 722 |
| processed_texts | sequencelengths | 1 to 723 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_16384_512_34M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3686
- F1 Score: 0.8241
- Accuracy: 0.825
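
To run inference with this adapter, it can be loaded on top of the base checkpoint with the `peft` API. A minimal sketch, assuming the base model loads as a standard sequence classifier and the GUE task is binary (neither is confirmed by this card):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_34M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_4-seqsight_16384_512_34M-L1_f"

# Assumptions (not confirmed by this card): the base checkpoint exposes a
# sequence-classification head and the task is binary; a custom architecture
# may additionally require trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # illustrative DNA sequence
logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```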
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
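
In `transformers` terms, the settings above correspond roughly to a `TrainingArguments` configuration like the sketch below (`output_dir` is an illustrative placeholder, not recorded in the card):

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="GUE_tf_4-seqsight_16384_512_34M-L1_f",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```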
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.572 | 1.34 | 200 | 0.5275 | 0.7249 | 0.725 |
| 0.5063 | 2.68 | 400 | 0.5116 | 0.7384 | 0.739 |
| 0.4945 | 4.03 | 600 | 0.5074 | 0.7322 | 0.734 |
| 0.4811 | 5.37 | 800 | 0.4996 | 0.7568 | 0.757 |
| 0.4677 | 6.71 | 1000 | 0.4935 | 0.7517 | 0.753 |
| 0.4588 | 8.05 | 1200 | 0.4824 | 0.7659 | 0.766 |
| 0.4544 | 9.4 | 1400 | 0.4826 | 0.7727 | 0.773 |
| 0.4516 | 10.74 | 1600 | 0.4839 | 0.7701 | 0.771 |
| 0.4469 | 12.08 | 1800 | 0.4875 | 0.7584 | 0.76 |
| 0.4443 | 13.42 | 2000 | 0.4713 | 0.7706 | 0.771 |
| 0.4368 | 14.77 | 2200 | 0.4755 | 0.7630 | 0.763 |
| 0.437 | 16.11 | 2400 | 0.4758 | 0.7708 | 0.771 |
| 0.4347 | 17.45 | 2600 | 0.4772 | 0.7718 | 0.772 |
| 0.4304 | 18.79 | 2800 | 0.4732 | 0.7740 | 0.774 |
| 0.4304 | 20.13 | 3000 | 0.4725 | 0.7670 | 0.767 |
| 0.4255 | 21.48 | 3200 | 0.4698 | 0.7660 | 0.766 |
| 0.4241 | 22.82 | 3400 | 0.4722 | 0.7718 | 0.772 |
| 0.4209 | 24.16 | 3600 | 0.4679 | 0.7689 | 0.769 |
| 0.4251 | 25.5 | 3800 | 0.4727 | 0.7754 | 0.776 |
| 0.4208 | 26.85 | 4000 | 0.4720 | 0.7690 | 0.769 |
| 0.4211 | 28.19 | 4200 | 0.4662 | 0.7750 | 0.775 |
| 0.4115 | 29.53 | 4400 | 0.4735 | 0.7730 | 0.773 |
| 0.4163 | 30.87 | 4600 | 0.4653 | 0.7769 | 0.777 |
| 0.4169 | 32.21 | 4800 | 0.4619 | 0.7710 | 0.771 |
| 0.4139 | 33.56 | 5000 | 0.4677 | 0.7778 | 0.778 |
| 0.4124 | 34.9 | 5200 | 0.4683 | 0.7788 | 0.779 |
| 0.4144 | 36.24 | 5400 | 0.4661 | 0.7750 | 0.775 |
| 0.4103 | 37.58 | 5600 | 0.4625 | 0.7759 | 0.776 |
| 0.4086 | 38.93 | 5800 | 0.4669 | 0.7730 | 0.773 |
| 0.4103 | 40.27 | 6000 | 0.4668 | 0.7780 | 0.778 |
| 0.4069 | 41.61 | 6200 | 0.4690 | 0.7780 | 0.778 |
| 0.409 | 42.95 | 6400 | 0.4653 | 0.7750 | 0.775 |
| 0.4081 | 44.3 | 6600 | 0.4647 | 0.7769 | 0.777 |
| 0.4032 | 45.64 | 6800 | 0.4683 | 0.7780 | 0.778 |
| 0.4088 | 46.98 | 7000 | 0.4669 | 0.7770 | 0.777 |
| 0.4075 | 48.32 | 7200 | 0.4614 | 0.774 | 0.774 |
| 0.398 | 49.66 | 7400 | 0.4703 | 0.7720 | 0.772 |
| 0.4041 | 51.01 | 7600 | 0.4618 | 0.7750 | 0.775 |
| 0.4031 | 52.35 | 7800 | 0.4652 | 0.7819 | 0.782 |
| 0.3981 | 53.69 | 8000 | 0.4672 | 0.7740 | 0.774 |
| 0.4021 | 55.03 | 8200 | 0.4687 | 0.7750 | 0.775 |
| 0.4011 | 56.38 | 8400 | 0.4647 | 0.7740 | 0.774 |
| 0.403 | 57.72 | 8600 | 0.4638 | 0.7808 | 0.781 |
| 0.4007 | 59.06 | 8800 | 0.4637 | 0.7760 | 0.776 |
| 0.3991 | 60.4 | 9000 | 0.4637 | 0.7780 | 0.778 |
| 0.4026 | 61.74 | 9200 | 0.4633 | 0.7779 | 0.778 |
| 0.3975 | 63.09 | 9400 | 0.4641 | 0.7770 | 0.777 |
| 0.3997 | 64.43 | 9600 | 0.4636 | 0.7760 | 0.776 |
| 0.3977 | 65.77 | 9800 | 0.4639 | 0.7760 | 0.776 |
| 0.3996 | 67.11 | 10000 | 0.4640 | 0.7770 | 0.777 |
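
The F1 Score and Accuracy columns can be produced by a standard `compute_metrics` callback passed to the `Trainer`; the sketch below is one plausible reconstruction (the F1 averaging mode is an assumption, not stated in the card):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # eval_pred is a transformers EvalPrediction: (logits, labels)
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        # "macro" averaging is an assumption; the card does not specify it.
        "f1": f1_score(labels, preds, average="macro"),
    }
```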
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_tf_4-seqsight_16384_512_34M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_16384_512_34M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
] | null | 2024-04-29T23:42:59+00:00 | [] | []
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
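
As a minimal, hedged starting point, assuming a standard Llama-architecture causal LM checkpoint (dtype, device placement, and the prompt are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jiuhai/llama-3-1275"  # repository id from this row's metadata

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```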
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jiuhai/llama-3-1275 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T23:43:48+00:00 | [
"1910.09700"
] | []
null | transformers |
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
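
For example, a single quant can be fetched programmatically with `huggingface_hub` (the Q4_K_M choice below is simply the "fast, recommended" row from the table that follows; the llama.cpp invocation in the comment is one common runtime, not a requirement):

```python
from huggingface_hub import hf_hub_download

# Fetch one quant from the table below (Q4_K_M, the "fast, recommended" row).
path = hf_hub_download(
    repo_id="mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2-GGUF",
    filename="Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2.Q4_K_M.gguf",
)
print(path)
# Any GGUF-capable runtime can then load the file, e.g. llama.cpp:
#   ./llama-cli -m <path> -p "Hello"
```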
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2", "quantized_by": "mradermacher"} | mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2-GGUF | null | [
"transformers",
"gguf",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:generator",
"base_model:yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Chinese_v2",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T23:47:56+00:00 | [] | [
"en"
]
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_16384_512_34M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3854
- F1 Score: 0.8247
- Accuracy: 0.825
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5456 | 1.34 | 200 | 0.5073 | 0.7377 | 0.738 |
| 0.4801 | 2.68 | 400 | 0.4954 | 0.7494 | 0.75 |
| 0.4637 | 4.03 | 600 | 0.4884 | 0.7644 | 0.765 |
| 0.4528 | 5.37 | 800 | 0.4954 | 0.7630 | 0.763 |
| 0.4406 | 6.71 | 1000 | 0.4798 | 0.7700 | 0.771 |
| 0.4299 | 8.05 | 1200 | 0.4713 | 0.768 | 0.768 |
| 0.4276 | 9.4 | 1400 | 0.4719 | 0.7786 | 0.779 |
| 0.4207 | 10.74 | 1600 | 0.4698 | 0.7710 | 0.771 |
| 0.4166 | 12.08 | 1800 | 0.4754 | 0.7857 | 0.787 |
| 0.4125 | 13.42 | 2000 | 0.4579 | 0.7770 | 0.777 |
| 0.4027 | 14.77 | 2200 | 0.4712 | 0.7724 | 0.773 |
| 0.4032 | 16.11 | 2400 | 0.4626 | 0.7935 | 0.794 |
| 0.3983 | 17.45 | 2600 | 0.4653 | 0.7839 | 0.784 |
| 0.3941 | 18.79 | 2800 | 0.4561 | 0.7940 | 0.794 |
| 0.3934 | 20.13 | 3000 | 0.4562 | 0.7828 | 0.783 |
| 0.3851 | 21.48 | 3200 | 0.4461 | 0.7959 | 0.796 |
| 0.3837 | 22.82 | 3400 | 0.4452 | 0.8039 | 0.804 |
| 0.3766 | 24.16 | 3600 | 0.4460 | 0.7919 | 0.792 |
| 0.3775 | 25.5 | 3800 | 0.4413 | 0.8114 | 0.812 |
| 0.3727 | 26.85 | 4000 | 0.4392 | 0.8020 | 0.802 |
| 0.371 | 28.19 | 4200 | 0.4370 | 0.7999 | 0.8 |
| 0.3584 | 29.53 | 4400 | 0.4424 | 0.8099 | 0.81 |
| 0.3617 | 30.87 | 4600 | 0.4483 | 0.7885 | 0.789 |
| 0.3601 | 32.21 | 4800 | 0.4387 | 0.8040 | 0.804 |
| 0.3556 | 33.56 | 5000 | 0.4406 | 0.8008 | 0.801 |
| 0.352 | 34.9 | 5200 | 0.4309 | 0.8110 | 0.811 |
| 0.3508 | 36.24 | 5400 | 0.4308 | 0.8060 | 0.806 |
| 0.3438 | 37.58 | 5600 | 0.4266 | 0.816 | 0.816 |
| 0.3418 | 38.93 | 5800 | 0.4413 | 0.8028 | 0.803 |
| 0.3415 | 40.27 | 6000 | 0.4279 | 0.8149 | 0.815 |
| 0.3345 | 41.61 | 6200 | 0.4329 | 0.8110 | 0.811 |
| 0.337 | 42.95 | 6400 | 0.4322 | 0.8090 | 0.809 |
| 0.3375 | 44.3 | 6600 | 0.4238 | 0.8140 | 0.814 |
| 0.3263 | 45.64 | 6800 | 0.4372 | 0.8079 | 0.808 |
| 0.3325 | 46.98 | 7000 | 0.4246 | 0.8130 | 0.813 |
| 0.3285 | 48.32 | 7200 | 0.4229 | 0.8140 | 0.814 |
| 0.3232 | 49.66 | 7400 | 0.4209 | 0.8190 | 0.819 |
| 0.3244 | 51.01 | 7600 | 0.4271 | 0.8120 | 0.812 |
| 0.3236 | 52.35 | 7800 | 0.4205 | 0.8249 | 0.825 |
| 0.3165 | 53.69 | 8000 | 0.4362 | 0.8149 | 0.815 |
| 0.3185 | 55.03 | 8200 | 0.4330 | 0.8129 | 0.813 |
| 0.3131 | 56.38 | 8400 | 0.4220 | 0.8210 | 0.821 |
| 0.3182 | 57.72 | 8600 | 0.4201 | 0.8250 | 0.825 |
| 0.3151 | 59.06 | 8800 | 0.4210 | 0.8190 | 0.819 |
| 0.3113 | 60.4 | 9000 | 0.4245 | 0.8210 | 0.821 |
| 0.3173 | 61.74 | 9200 | 0.4195 | 0.8240 | 0.824 |
| 0.3123 | 63.09 | 9400 | 0.4211 | 0.8220 | 0.822 |
| 0.3141 | 64.43 | 9600 | 0.4203 | 0.82 | 0.82 |
| 0.3088 | 65.77 | 9800 | 0.4216 | 0.8250 | 0.825 |
| 0.311 | 67.11 | 10000 | 0.4217 | 0.8250 | 0.825 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_tf_4-seqsight_16384_512_34M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_16384_512_34M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
] | null | 2024-04-29T23:48:31+00:00 | [] | []
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_16384_512_34M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5901
- F1 Score: 0.8129
- Accuracy: 0.813
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5297 | 1.34 | 200 | 0.4928 | 0.7517 | 0.752 |
| 0.4661 | 2.68 | 400 | 0.5149 | 0.7549 | 0.755 |
| 0.4473 | 4.03 | 600 | 0.4853 | 0.7699 | 0.771 |
| 0.4326 | 5.37 | 800 | 0.5007 | 0.7561 | 0.758 |
| 0.4162 | 6.71 | 1000 | 0.4638 | 0.7848 | 0.785 |
| 0.4019 | 8.05 | 1200 | 0.4539 | 0.7849 | 0.785 |
| 0.3924 | 9.4 | 1400 | 0.4498 | 0.7935 | 0.794 |
| 0.3805 | 10.74 | 1600 | 0.4652 | 0.7799 | 0.78 |
| 0.3725 | 12.08 | 1800 | 0.4425 | 0.8020 | 0.802 |
| 0.3594 | 13.42 | 2000 | 0.4628 | 0.7866 | 0.788 |
| 0.3444 | 14.77 | 2200 | 0.4541 | 0.7995 | 0.8 |
| 0.3365 | 16.11 | 2400 | 0.4322 | 0.7997 | 0.8 |
| 0.321 | 17.45 | 2600 | 0.4475 | 0.8050 | 0.805 |
| 0.3122 | 18.79 | 2800 | 0.4481 | 0.8150 | 0.815 |
| 0.304 | 20.13 | 3000 | 0.4576 | 0.7976 | 0.798 |
| 0.2882 | 21.48 | 3200 | 0.4504 | 0.8140 | 0.814 |
| 0.2842 | 22.82 | 3400 | 0.4455 | 0.8108 | 0.811 |
| 0.2708 | 24.16 | 3600 | 0.4433 | 0.8220 | 0.822 |
| 0.2614 | 25.5 | 3800 | 0.4485 | 0.8259 | 0.826 |
| 0.2567 | 26.85 | 4000 | 0.4232 | 0.8210 | 0.821 |
| 0.248 | 28.19 | 4200 | 0.4486 | 0.8220 | 0.822 |
| 0.238 | 29.53 | 4400 | 0.4605 | 0.8158 | 0.816 |
| 0.2309 | 30.87 | 4600 | 0.4366 | 0.8300 | 0.83 |
| 0.2231 | 32.21 | 4800 | 0.4656 | 0.8259 | 0.826 |
| 0.2188 | 33.56 | 5000 | 0.4436 | 0.8300 | 0.83 |
| 0.2102 | 34.9 | 5200 | 0.4568 | 0.8329 | 0.833 |
| 0.2044 | 36.24 | 5400 | 0.4466 | 0.8330 | 0.833 |
| 0.1962 | 37.58 | 5600 | 0.4447 | 0.8429 | 0.843 |
| 0.1926 | 38.93 | 5800 | 0.4466 | 0.8350 | 0.835 |
| 0.185 | 40.27 | 6000 | 0.4797 | 0.8387 | 0.839 |
| 0.1849 | 41.61 | 6200 | 0.4501 | 0.8340 | 0.834 |
| 0.1808 | 42.95 | 6400 | 0.4483 | 0.8430 | 0.843 |
| 0.1787 | 44.3 | 6600 | 0.4778 | 0.8406 | 0.841 |
| 0.1676 | 45.64 | 6800 | 0.4775 | 0.8390 | 0.839 |
| 0.1719 | 46.98 | 7000 | 0.4575 | 0.8360 | 0.836 |
| 0.1691 | 48.32 | 7200 | 0.4528 | 0.8379 | 0.838 |
| 0.1613 | 49.66 | 7400 | 0.4760 | 0.8468 | 0.847 |
| 0.1573 | 51.01 | 7600 | 0.4768 | 0.8479 | 0.848 |
| 0.1559 | 52.35 | 7800 | 0.4849 | 0.8517 | 0.852 |
| 0.1533 | 53.69 | 8000 | 0.4885 | 0.8450 | 0.845 |
| 0.1503 | 55.03 | 8200 | 0.4771 | 0.8539 | 0.854 |
| 0.1471 | 56.38 | 8400 | 0.4908 | 0.8458 | 0.846 |
| 0.1503 | 57.72 | 8600 | 0.4779 | 0.8508 | 0.851 |
| 0.1469 | 59.06 | 8800 | 0.4724 | 0.8508 | 0.851 |
| 0.1389 | 60.4 | 9000 | 0.4945 | 0.8489 | 0.849 |
| 0.1396 | 61.74 | 9200 | 0.4991 | 0.8518 | 0.852 |
| 0.1378 | 63.09 | 9400 | 0.5089 | 0.8498 | 0.85 |
| 0.1412 | 64.43 | 9600 | 0.4989 | 0.8498 | 0.85 |
| 0.1363 | 65.77 | 9800 | 0.4998 | 0.8498 | 0.85 |
| 0.1342 | 67.11 | 10000 | 0.5013 | 0.8508 | 0.851 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_tf_4-seqsight_16384_512_34M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_16384_512_34M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
] | null | 2024-04-29T23:49:32+00:00 | [] | []
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_16384_512_34M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5562
- F1 Score: 0.6981
- Accuracy: 0.7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6428 | 0.93 | 200 | 0.5758 | 0.6930 | 0.693 |
| 0.6153 | 1.87 | 400 | 0.5634 | 0.7105 | 0.712 |
| 0.607 | 2.8 | 600 | 0.5608 | 0.7004 | 0.705 |
| 0.6014 | 3.74 | 800 | 0.5559 | 0.7119 | 0.713 |
| 0.5963 | 4.67 | 1000 | 0.5524 | 0.7130 | 0.715 |
| 0.5903 | 5.61 | 1200 | 0.5488 | 0.7154 | 0.719 |
| 0.5887 | 6.54 | 1400 | 0.5478 | 0.7196 | 0.722 |
| 0.5883 | 7.48 | 1600 | 0.5478 | 0.7100 | 0.711 |
| 0.5856 | 8.41 | 1800 | 0.5458 | 0.7106 | 0.712 |
| 0.5892 | 9.35 | 2000 | 0.5476 | 0.7058 | 0.708 |
| 0.5813 | 10.28 | 2200 | 0.5492 | 0.7040 | 0.704 |
| 0.5811 | 11.21 | 2400 | 0.5449 | 0.7155 | 0.718 |
| 0.5809 | 12.15 | 2600 | 0.5434 | 0.7063 | 0.712 |
| 0.5763 | 13.08 | 2800 | 0.5436 | 0.7113 | 0.716 |
| 0.5781 | 14.02 | 3000 | 0.5416 | 0.7167 | 0.718 |
| 0.577 | 14.95 | 3200 | 0.5410 | 0.7107 | 0.714 |
| 0.5751 | 15.89 | 3400 | 0.5396 | 0.7140 | 0.716 |
| 0.5736 | 16.82 | 3600 | 0.5405 | 0.7136 | 0.715 |
| 0.5722 | 17.76 | 3800 | 0.5410 | 0.7080 | 0.711 |
| 0.5717 | 18.69 | 4000 | 0.5409 | 0.7128 | 0.714 |
| 0.5709 | 19.63 | 4200 | 0.5433 | 0.7119 | 0.718 |
| 0.5725 | 20.56 | 4400 | 0.5395 | 0.7061 | 0.709 |
| 0.5698 | 21.5 | 4600 | 0.5439 | 0.7171 | 0.717 |
| 0.5686 | 22.43 | 4800 | 0.5380 | 0.7081 | 0.71 |
| 0.5713 | 23.36 | 5000 | 0.5405 | 0.7086 | 0.709 |
| 0.5679 | 24.3 | 5200 | 0.5437 | 0.7121 | 0.712 |
| 0.5615 | 25.23 | 5400 | 0.5383 | 0.7119 | 0.715 |
| 0.5684 | 26.17 | 5600 | 0.5371 | 0.7160 | 0.717 |
| 0.5622 | 27.1 | 5800 | 0.5355 | 0.7152 | 0.717 |
| 0.563 | 28.04 | 6000 | 0.5355 | 0.7140 | 0.715 |
| 0.564 | 28.97 | 6200 | 0.5401 | 0.7161 | 0.716 |
| 0.5643 | 29.91 | 6400 | 0.5363 | 0.7243 | 0.725 |
| 0.5634 | 30.84 | 6600 | 0.5351 | 0.7138 | 0.715 |
| 0.5609 | 31.78 | 6800 | 0.5367 | 0.7177 | 0.718 |
| 0.5595 | 32.71 | 7000 | 0.5351 | 0.7153 | 0.716 |
| 0.567 | 33.64 | 7200 | 0.5350 | 0.7138 | 0.715 |
| 0.5561 | 34.58 | 7400 | 0.5391 | 0.7190 | 0.719 |
| 0.5614 | 35.51 | 7600 | 0.5346 | 0.7167 | 0.717 |
| 0.5596 | 36.45 | 7800 | 0.5360 | 0.7167 | 0.717 |
| 0.5588 | 37.38 | 8000 | 0.5345 | 0.7195 | 0.72 |
| 0.5574 | 38.32 | 8200 | 0.5342 | 0.7151 | 0.716 |
| 0.5601 | 39.25 | 8400 | 0.5339 | 0.7184 | 0.719 |
| 0.5588 | 40.19 | 8600 | 0.5342 | 0.7155 | 0.716 |
| 0.5575 | 41.12 | 8800 | 0.5343 | 0.7163 | 0.717 |
| 0.5628 | 42.06 | 9000 | 0.5343 | 0.7156 | 0.716 |
| 0.5594 | 42.99 | 9200 | 0.5344 | 0.7177 | 0.718 |
| 0.5562 | 43.93 | 9400 | 0.5354 | 0.7178 | 0.718 |
| 0.5572 | 44.86 | 9600 | 0.5343 | 0.7187 | 0.719 |
| 0.5584 | 45.79 | 9800 | 0.5342 | 0.7175 | 0.718 |
| 0.5541 | 46.73 | 10000 | 0.5346 | 0.7166 | 0.717 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_tf_3-seqsight_16384_512_34M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_16384_512_34M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
] | null | 2024-04-29T23:49:39+00:00 | [] | []
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_16384_512_34M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5390
- F1 Score: 0.7280
- Accuracy: 0.728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6328 | 0.93 | 200 | 0.5638 | 0.7110 | 0.714 |
| 0.6044 | 1.87 | 400 | 0.5623 | 0.7070 | 0.707 |
| 0.5948 | 2.8 | 600 | 0.5532 | 0.7065 | 0.71 |
| 0.5882 | 3.74 | 800 | 0.5536 | 0.7091 | 0.709 |
| 0.584 | 4.67 | 1000 | 0.5446 | 0.7099 | 0.711 |
| 0.5769 | 5.61 | 1200 | 0.5432 | 0.7112 | 0.716 |
| 0.5732 | 6.54 | 1400 | 0.5439 | 0.7157 | 0.717 |
| 0.5702 | 7.48 | 1600 | 0.5424 | 0.7162 | 0.717 |
| 0.5669 | 8.41 | 1800 | 0.5448 | 0.7143 | 0.715 |
| 0.5694 | 9.35 | 2000 | 0.5461 | 0.7093 | 0.715 |
| 0.561 | 10.28 | 2200 | 0.5421 | 0.7104 | 0.711 |
| 0.5578 | 11.21 | 2400 | 0.5421 | 0.7219 | 0.723 |
| 0.5579 | 12.15 | 2600 | 0.5454 | 0.7093 | 0.714 |
| 0.5507 | 13.08 | 2800 | 0.5558 | 0.6980 | 0.707 |
| 0.551 | 14.02 | 3000 | 0.5418 | 0.7194 | 0.72 |
| 0.5495 | 14.95 | 3200 | 0.5433 | 0.7046 | 0.708 |
| 0.5461 | 15.89 | 3400 | 0.5376 | 0.7211 | 0.722 |
| 0.5435 | 16.82 | 3600 | 0.5418 | 0.7140 | 0.715 |
| 0.5422 | 17.76 | 3800 | 0.5434 | 0.7123 | 0.716 |
| 0.5407 | 18.69 | 4000 | 0.5432 | 0.7186 | 0.719 |
| 0.5391 | 19.63 | 4200 | 0.5443 | 0.7074 | 0.712 |
| 0.5385 | 20.56 | 4400 | 0.5439 | 0.7127 | 0.716 |
| 0.5358 | 21.5 | 4600 | 0.5461 | 0.7289 | 0.729 |
| 0.533 | 22.43 | 4800 | 0.5382 | 0.7212 | 0.723 |
| 0.5348 | 23.36 | 5000 | 0.5549 | 0.7170 | 0.717 |
| 0.5307 | 24.3 | 5200 | 0.5536 | 0.7218 | 0.722 |
| 0.5235 | 25.23 | 5400 | 0.5532 | 0.7145 | 0.717 |
| 0.528 | 26.17 | 5600 | 0.5455 | 0.7125 | 0.714 |
| 0.5223 | 27.1 | 5800 | 0.5453 | 0.7214 | 0.722 |
| 0.5229 | 28.04 | 6000 | 0.5488 | 0.7136 | 0.715 |
| 0.5211 | 28.97 | 6200 | 0.5534 | 0.7191 | 0.719 |
| 0.522 | 29.91 | 6400 | 0.5509 | 0.7123 | 0.714 |
| 0.5212 | 30.84 | 6600 | 0.5456 | 0.7153 | 0.717 |
| 0.5166 | 31.78 | 6800 | 0.5505 | 0.7230 | 0.723 |
| 0.5157 | 32.71 | 7000 | 0.5453 | 0.7204 | 0.721 |
| 0.5221 | 33.64 | 7200 | 0.5481 | 0.7126 | 0.713 |
| 0.5097 | 34.58 | 7400 | 0.5544 | 0.7211 | 0.721 |
| 0.5173 | 35.51 | 7600 | 0.5476 | 0.7110 | 0.711 |
| 0.5134 | 36.45 | 7800 | 0.5537 | 0.7097 | 0.71 |
| 0.5153 | 37.38 | 8000 | 0.5478 | 0.7103 | 0.711 |
| 0.5088 | 38.32 | 8200 | 0.5528 | 0.7207 | 0.721 |
| 0.5124 | 39.25 | 8400 | 0.5515 | 0.7156 | 0.716 |
| 0.5096 | 40.19 | 8600 | 0.5487 | 0.7137 | 0.714 |
| 0.5091 | 41.12 | 8800 | 0.5523 | 0.7124 | 0.713 |
| 0.5146 | 42.06 | 9000 | 0.5478 | 0.7147 | 0.715 |
| 0.5077 | 42.99 | 9200 | 0.5515 | 0.7129 | 0.713 |
| 0.5061 | 43.93 | 9400 | 0.5532 | 0.7129 | 0.713 |
| 0.5076 | 44.86 | 9600 | 0.5531 | 0.7118 | 0.712 |
| 0.5072 | 45.79 | 9800 | 0.5520 | 0.7108 | 0.711 |
| 0.5042 | 46.73 | 10000 | 0.5526 | 0.7098 | 0.71 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_tf_3-seqsight_16384_512_34M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_16384_512_34M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
] | null | 2024-04-29T23:50:05+00:00 | [] | []
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
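
As a minimal, hedged sketch for this repository (tagged `conversational`, so a chat-template call is assumed; all generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cilantro9246/9y8jkih"  # repository id from this row's metadata

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A chat template is assumed to be defined, given the "conversational" tag.
messages = [{"role": "user", "content": "Briefly introduce yourself."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```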
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cilantro9246/9y8jkih | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T23:50:19+00:00 | [
"1910.09700"
] | []
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_16384_512_34M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5940
- F1 Score: 0.7035
- Accuracy: 0.707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
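The hyperparameters above map directly onto the 🤗 `TrainingArguments` API. The sketch below is a reconstruction for illustration only — the repository ships no training script, so treat the argument mapping (in particular the optimizer fields) as an assumption:

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the reported setup; not the authors' actual script.
training_args = TrainingArguments(
    output_dir="GUE_tf_3-seqsight_16384_512_34M-L32_f",
    learning_rate=5e-4,                # learning_rate: 0.0005
    per_device_train_batch_size=128,   # train_batch_size: 128
    per_device_eval_batch_size=128,    # eval_batch_size: 128
    seed=42,
    max_steps=10_000,                  # training_steps: 10000 (overrides epoch-based stopping)
    lr_scheduler_type="linear",
    adam_beta1=0.9,                    # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```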
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6271 | 0.93 | 200 | 0.5591 | 0.7131 | 0.718 |
| 0.5972 | 1.87 | 400 | 0.5559 | 0.7051 | 0.705 |
| 0.5861 | 2.8 | 600 | 0.5471 | 0.7059 | 0.708 |
| 0.5777 | 3.74 | 800 | 0.5647 | 0.7071 | 0.707 |
| 0.5719 | 4.67 | 1000 | 0.5437 | 0.7045 | 0.705 |
| 0.5632 | 5.61 | 1200 | 0.5432 | 0.7119 | 0.718 |
| 0.5562 | 6.54 | 1400 | 0.5501 | 0.7100 | 0.71 |
| 0.5526 | 7.48 | 1600 | 0.5379 | 0.7161 | 0.717 |
| 0.5461 | 8.41 | 1800 | 0.5408 | 0.7150 | 0.715 |
| 0.5467 | 9.35 | 2000 | 0.5424 | 0.7214 | 0.724 |
| 0.5362 | 10.28 | 2200 | 0.5461 | 0.7246 | 0.725 |
| 0.5302 | 11.21 | 2400 | 0.5507 | 0.7169 | 0.718 |
| 0.5286 | 12.15 | 2600 | 0.5606 | 0.7135 | 0.718 |
| 0.5175 | 13.08 | 2800 | 0.5649 | 0.7006 | 0.707 |
| 0.5157 | 14.02 | 3000 | 0.5523 | 0.7143 | 0.715 |
| 0.5115 | 14.95 | 3200 | 0.5487 | 0.7081 | 0.71 |
| 0.5072 | 15.89 | 3400 | 0.5407 | 0.7235 | 0.724 |
| 0.4986 | 16.82 | 3600 | 0.5673 | 0.7140 | 0.714 |
| 0.4968 | 17.76 | 3800 | 0.5487 | 0.7177 | 0.719 |
| 0.4916 | 18.69 | 4000 | 0.5555 | 0.7181 | 0.718 |
| 0.4885 | 19.63 | 4200 | 0.5612 | 0.7208 | 0.721 |
| 0.4862 | 20.56 | 4400 | 0.5560 | 0.7227 | 0.723 |
| 0.48 | 21.5 | 4600 | 0.5778 | 0.7031 | 0.705 |
| 0.4758 | 22.43 | 4800 | 0.5610 | 0.7157 | 0.716 |
| 0.4735 | 23.36 | 5000 | 0.5741 | 0.7081 | 0.708 |
| 0.4682 | 24.3 | 5200 | 0.5832 | 0.7209 | 0.721 |
| 0.4603 | 25.23 | 5400 | 0.5903 | 0.7306 | 0.731 |
| 0.4631 | 26.17 | 5600 | 0.5795 | 0.7248 | 0.725 |
| 0.4555 | 27.1 | 5800 | 0.5940 | 0.7241 | 0.724 |
| 0.4512 | 28.04 | 6000 | 0.5885 | 0.7220 | 0.722 |
| 0.4483 | 28.97 | 6200 | 0.5963 | 0.7221 | 0.722 |
| 0.4488 | 29.91 | 6400 | 0.5969 | 0.7235 | 0.724 |
| 0.4451 | 30.84 | 6600 | 0.5911 | 0.7198 | 0.72 |
| 0.4362 | 31.78 | 6800 | 0.6090 | 0.7141 | 0.714 |
| 0.4342 | 32.71 | 7000 | 0.6066 | 0.7217 | 0.722 |
| 0.4407 | 33.64 | 7200 | 0.6057 | 0.7121 | 0.712 |
| 0.4251 | 34.58 | 7400 | 0.6102 | 0.7109 | 0.711 |
| 0.4315 | 35.51 | 7600 | 0.6189 | 0.7059 | 0.706 |
| 0.4275 | 36.45 | 7800 | 0.6144 | 0.7171 | 0.717 |
| 0.4286 | 37.38 | 8000 | 0.6003 | 0.7211 | 0.721 |
| 0.4192 | 38.32 | 8200 | 0.6148 | 0.7161 | 0.716 |
| 0.4238 | 39.25 | 8400 | 0.6169 | 0.7131 | 0.713 |
| 0.4229 | 40.19 | 8600 | 0.6093 | 0.7161 | 0.716 |
| 0.4145 | 41.12 | 8800 | 0.6278 | 0.7151 | 0.715 |
| 0.4207 | 42.06 | 9000 | 0.6158 | 0.7141 | 0.714 |
| 0.4136 | 42.99 | 9200 | 0.6266 | 0.7151 | 0.715 |
| 0.4136 | 43.93 | 9400 | 0.6267 | 0.7130 | 0.713 |
| 0.4121 | 44.86 | 9600 | 0.6264 | 0.7161 | 0.716 |
| 0.4078 | 45.79 | 9800 | 0.6284 | 0.7131 | 0.713 |
| 0.4058 | 46.73 | 10000 | 0.6311 | 0.7131 | 0.713 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_tf_3-seqsight_16384_512_34M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_16384_512_34M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
] | null | 2024-04-29T23:51:08+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_tf\_3-seqsight\_16384\_512\_34M-L32\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5940
* F1 Score: 0.7035
* Accuracy: 0.707
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_16384_512_34M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set (a metric-computation sketch follows the list):
- Loss: 0.4517
- F1 Score: 0.7858
- Accuracy: 0.786
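Metrics of this kind are typically produced by a `compute_metrics` callback over the evaluation logits. A minimal scikit-learn sketch is shown below; the F1 averaging mode is an assumption, since the card does not state it:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Derive accuracy and F1 from the (logits, labels) pair passed by the Trainer."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="macro"),  # averaging mode assumed, not documented
    }
```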
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5922 | 1.34 | 200 | 0.5723 | 0.6936 | 0.701 |
| 0.5466 | 2.68 | 400 | 0.5360 | 0.7268 | 0.728 |
| 0.537 | 4.03 | 600 | 0.5297 | 0.7376 | 0.739 |
| 0.5299 | 5.37 | 800 | 0.5187 | 0.7463 | 0.747 |
| 0.5273 | 6.71 | 1000 | 0.5205 | 0.7375 | 0.74 |
| 0.5214 | 8.05 | 1200 | 0.5109 | 0.7505 | 0.751 |
| 0.5164 | 9.4 | 1400 | 0.5229 | 0.7275 | 0.731 |
| 0.5153 | 10.74 | 1600 | 0.5082 | 0.7427 | 0.744 |
| 0.5138 | 12.08 | 1800 | 0.5076 | 0.7570 | 0.757 |
| 0.512 | 13.42 | 2000 | 0.5041 | 0.7560 | 0.756 |
| 0.5118 | 14.77 | 2200 | 0.5036 | 0.7393 | 0.741 |
| 0.4988 | 16.11 | 2400 | 0.5042 | 0.7518 | 0.752 |
| 0.5056 | 17.45 | 2600 | 0.5020 | 0.7512 | 0.752 |
| 0.502 | 18.79 | 2800 | 0.4991 | 0.7441 | 0.745 |
| 0.4971 | 20.13 | 3000 | 0.4994 | 0.7499 | 0.751 |
| 0.4956 | 21.48 | 3200 | 0.4997 | 0.7559 | 0.756 |
| 0.4985 | 22.82 | 3400 | 0.4957 | 0.7473 | 0.748 |
| 0.4946 | 24.16 | 3600 | 0.4978 | 0.7495 | 0.751 |
| 0.4906 | 25.5 | 3800 | 0.4968 | 0.7459 | 0.747 |
| 0.4923 | 26.85 | 4000 | 0.4988 | 0.7431 | 0.745 |
| 0.4923 | 28.19 | 4200 | 0.4960 | 0.7478 | 0.749 |
| 0.4884 | 29.53 | 4400 | 0.4941 | 0.7598 | 0.76 |
| 0.4872 | 30.87 | 4600 | 0.4941 | 0.7558 | 0.756 |
| 0.4889 | 32.21 | 4800 | 0.4938 | 0.7554 | 0.756 |
| 0.4893 | 33.56 | 5000 | 0.4922 | 0.7660 | 0.766 |
| 0.4889 | 34.9 | 5200 | 0.4936 | 0.7597 | 0.76 |
| 0.4814 | 36.24 | 5400 | 0.4970 | 0.7570 | 0.758 |
| 0.486 | 37.58 | 5600 | 0.4923 | 0.7552 | 0.756 |
| 0.4821 | 38.93 | 5800 | 0.4939 | 0.7531 | 0.754 |
| 0.4829 | 40.27 | 6000 | 0.4940 | 0.7463 | 0.748 |
| 0.4844 | 41.61 | 6200 | 0.4928 | 0.7520 | 0.753 |
| 0.4822 | 42.95 | 6400 | 0.4924 | 0.7657 | 0.766 |
| 0.4808 | 44.3 | 6600 | 0.4916 | 0.7626 | 0.763 |
| 0.4807 | 45.64 | 6800 | 0.4889 | 0.7619 | 0.762 |
| 0.4777 | 46.98 | 7000 | 0.4942 | 0.7498 | 0.751 |
| 0.4778 | 48.32 | 7200 | 0.4922 | 0.7657 | 0.766 |
| 0.4765 | 49.66 | 7400 | 0.4916 | 0.7623 | 0.763 |
| 0.4762 | 51.01 | 7600 | 0.4907 | 0.7636 | 0.764 |
| 0.4799 | 52.35 | 7800 | 0.4926 | 0.7581 | 0.759 |
| 0.4769 | 53.69 | 8000 | 0.4926 | 0.7507 | 0.752 |
| 0.4765 | 55.03 | 8200 | 0.4907 | 0.7614 | 0.762 |
| 0.477 | 56.38 | 8400 | 0.4898 | 0.7648 | 0.765 |
| 0.4774 | 57.72 | 8600 | 0.4925 | 0.7570 | 0.758 |
| 0.4733 | 59.06 | 8800 | 0.4921 | 0.7612 | 0.762 |
| 0.4743 | 60.4 | 9000 | 0.4917 | 0.7614 | 0.762 |
| 0.4742 | 61.74 | 9200 | 0.4918 | 0.7602 | 0.761 |
| 0.4777 | 63.09 | 9400 | 0.4912 | 0.7591 | 0.76 |
| 0.4737 | 64.43 | 9600 | 0.4919 | 0.7612 | 0.762 |
| 0.4749 | 65.77 | 9800 | 0.4914 | 0.7613 | 0.762 |
| 0.4754 | 67.11 | 10000 | 0.4917 | 0.7612 | 0.762 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_tf_2-seqsight_16384_512_34M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_16384_512_34M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
] | null | 2024-04-29T23:52:05+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_tf\_2-seqsight\_16384\_512\_34M-L1\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4517
* F1 Score: 0.7858
* Accuracy: 0.786
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Roberta-base-Rewritten-commit_messages_v1
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set (a usage sketch follows the metrics):
- Loss: 4.0366
- Accuracy: 0.7334
- F1: 0.7333
- Precision: 0.7485
- Recall: 0.7334
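Once published, the checkpoint can be queried through the standard `pipeline` API. This is a generic usage sketch — the label names returned by the classification head are not documented on this card:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ManojAlexender/Roberta-base-Rewritten-commit_messages_v1",
)
# Label ids/names depend on the (undocumented) training configuration.
print(classifier("Fix off-by-one error in buffer length check"))
```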
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.211 | 0.09 | 100 | 0.5038 | 0.7716 | 0.7710 | 0.7712 | 0.7716 |
| 0.1021 | 0.17 | 200 | 1.4349 | 0.6967 | 0.6878 | 0.7575 | 0.6967 |
| 0.21 | 0.26 | 300 | 1.6924 | 0.6772 | 0.6621 | 0.7604 | 0.6772 |
| 0.3015 | 0.34 | 400 | 1.2524 | 0.7688 | 0.7689 | 0.7691 | 0.7688 |
| 0.0545 | 0.43 | 500 | 0.6203 | 0.7967 | 0.7944 | 0.7996 | 0.7967 |
| 0.2606 | 0.52 | 600 | 1.9708 | 0.7812 | 0.7816 | 0.7831 | 0.7812 |
| 0.2303 | 0.6 | 700 | 4.2551 | 0.7003 | 0.6877 | 0.7822 | 0.7003 |
| 0.3791 | 0.69 | 800 | 3.9563 | 0.6449 | 0.6287 | 0.7176 | 0.6449 |
| 0.1818 | 0.77 | 900 | 3.2710 | 0.7413 | 0.7382 | 0.7798 | 0.7413 |
| 0.2664 | 0.86 | 1000 | 3.1223 | 0.7541 | 0.7545 | 0.7637 | 0.7541 |
| 0.0975 | 0.95 | 1100 | 4.2953 | 0.6628 | 0.6504 | 0.7279 | 0.6628 |
| 0.0994 | 1.03 | 1200 | 3.0247 | 0.7577 | 0.7568 | 0.7808 | 0.7577 |
| 0.0 | 1.12 | 1300 | 4.3038 | 0.7094 | 0.7048 | 0.7515 | 0.7094 |
| 0.1308 | 1.2 | 1400 | 3.6026 | 0.7497 | 0.7502 | 0.7576 | 0.7497 |
| 0.0518 | 1.29 | 1500 | 3.1581 | 0.7629 | 0.7634 | 0.7669 | 0.7629 |
| 0.0444 | 1.38 | 1600 | 3.5891 | 0.7485 | 0.7481 | 0.7480 | 0.7485 |
| 0.0904 | 1.46 | 1700 | 3.5088 | 0.7206 | 0.7162 | 0.7224 | 0.7206 |
| 0.2481 | 1.55 | 1800 | 3.5703 | 0.7083 | 0.7077 | 0.7257 | 0.7083 |
| 0.2444 | 1.63 | 1900 | 2.8876 | 0.7740 | 0.7688 | 0.7829 | 0.7740 |
| 0.1379 | 1.72 | 2000 | 3.2268 | 0.7385 | 0.7336 | 0.7426 | 0.7385 |
| 0.1244 | 1.81 | 2100 | 3.4375 | 0.7413 | 0.7417 | 0.7519 | 0.7413 |
| 0.0826 | 1.89 | 2200 | 3.1960 | 0.7688 | 0.7689 | 0.7689 | 0.7688 |
| 0.1172 | 1.98 | 2300 | 3.0243 | 0.7784 | 0.7784 | 0.7784 | 0.7784 |
| 0.1066 | 2.06 | 2400 | 3.2134 | 0.7764 | 0.7769 | 0.7790 | 0.7764 |
| 0.0 | 2.15 | 2500 | 3.3545 | 0.7664 | 0.7670 | 0.7694 | 0.7664 |
| 0.1429 | 2.24 | 2600 | 3.7563 | 0.7393 | 0.7389 | 0.7574 | 0.7393 |
| 0.0451 | 2.32 | 2700 | 3.4993 | 0.7537 | 0.7540 | 0.7548 | 0.7537 |
| 0.0 | 2.41 | 2800 | 3.7249 | 0.7501 | 0.7504 | 0.7511 | 0.7501 |
| 0.0273 | 2.49 | 2900 | 4.2791 | 0.7190 | 0.7175 | 0.7434 | 0.7190 |
| 0.0789 | 2.58 | 3000 | 4.2166 | 0.7178 | 0.7161 | 0.7438 | 0.7178 |
| 0.0527 | 2.67 | 3100 | 4.0366 | 0.7334 | 0.7333 | 0.7485 | 0.7334 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "model-index": [{"name": "Roberta-base-Rewritten-commit_messages_v1", "results": []}]} | ManojAlexender/Roberta-base-Rewritten-commit_messages_v1 | null | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T23:52:28+00:00 | [] | [] | TAGS
#transformers #safetensors #roberta #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| Roberta-base-Rewritten-commit\_messages\_v1
===========================================
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 4.0366
* Accuracy: 0.7334
* F1: 0.7333
* Precision: 0.7485
* Recall: 0.7334
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 64
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2+cu121
* Datasets 2.16.1
* Tokenizers 0.15.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1"
] | [
"TAGS\n#transformers #safetensors #roberta #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_16384_512_34M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set (an adapter-loading sketch follows the metrics):
- Loss: 0.4944
- F1 Score: 0.7843
- Accuracy: 0.785
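Because this is a PEFT checkpoint, the adapter is loaded on top of the base model rather than standalone. A minimal sketch follows; the sequence-classification head, `num_labels=2`, and whether the base model requires `trust_remote_code` are assumptions not stated on the card:

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_34M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_2-seqsight_16384_512_34M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attaches the fine-tuned adapter
```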
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a seeding note follows the list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
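With `seed: 42` fixed, runs are meant to be repeatable; `transformers` provides a single helper that seeds Python, NumPy, and PyTorch together — a sketch, assuming the standard API was used:

```python
from transformers import set_seed

set_seed(42)  # seeds random, numpy and torch (including CUDA) in one call
```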
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5742 | 1.34 | 200 | 0.5338 | 0.7322 | 0.735 |
| 0.5343 | 2.68 | 400 | 0.5187 | 0.7540 | 0.754 |
| 0.5225 | 4.03 | 600 | 0.5097 | 0.7577 | 0.758 |
| 0.5134 | 5.37 | 800 | 0.5062 | 0.7484 | 0.749 |
| 0.5094 | 6.71 | 1000 | 0.5097 | 0.7500 | 0.75 |
| 0.5022 | 8.05 | 1200 | 0.5060 | 0.7313 | 0.734 |
| 0.4968 | 9.4 | 1400 | 0.5155 | 0.7331 | 0.736 |
| 0.4948 | 10.74 | 1600 | 0.4904 | 0.7496 | 0.75 |
| 0.4917 | 12.08 | 1800 | 0.4910 | 0.7519 | 0.752 |
| 0.4864 | 13.42 | 2000 | 0.4947 | 0.7430 | 0.743 |
| 0.4861 | 14.77 | 2200 | 0.5052 | 0.7332 | 0.738 |
| 0.4726 | 16.11 | 2400 | 0.4964 | 0.7477 | 0.748 |
| 0.4775 | 17.45 | 2600 | 0.4947 | 0.7430 | 0.745 |
| 0.4721 | 18.79 | 2800 | 0.4933 | 0.7437 | 0.745 |
| 0.4653 | 20.13 | 3000 | 0.4921 | 0.7409 | 0.742 |
| 0.4637 | 21.48 | 3200 | 0.4964 | 0.7499 | 0.75 |
| 0.4648 | 22.82 | 3400 | 0.4981 | 0.7490 | 0.75 |
| 0.4579 | 24.16 | 3600 | 0.4962 | 0.7502 | 0.752 |
| 0.4546 | 25.5 | 3800 | 0.4851 | 0.7597 | 0.76 |
| 0.4561 | 26.85 | 4000 | 0.4911 | 0.7505 | 0.752 |
| 0.4529 | 28.19 | 4200 | 0.4919 | 0.7559 | 0.756 |
| 0.4503 | 29.53 | 4400 | 0.4964 | 0.7533 | 0.754 |
| 0.4442 | 30.87 | 4600 | 0.4948 | 0.7523 | 0.753 |
| 0.4427 | 32.21 | 4800 | 0.5022 | 0.7558 | 0.756 |
| 0.4457 | 33.56 | 5000 | 0.5023 | 0.7489 | 0.749 |
| 0.444 | 34.9 | 5200 | 0.5046 | 0.7528 | 0.753 |
| 0.437 | 36.24 | 5400 | 0.5088 | 0.7525 | 0.754 |
| 0.4357 | 37.58 | 5600 | 0.5016 | 0.7557 | 0.756 |
| 0.4354 | 38.93 | 5800 | 0.5091 | 0.7531 | 0.754 |
| 0.4325 | 40.27 | 6000 | 0.5177 | 0.7441 | 0.746 |
| 0.4314 | 41.61 | 6200 | 0.5108 | 0.7555 | 0.757 |
| 0.4299 | 42.95 | 6400 | 0.5078 | 0.7599 | 0.76 |
| 0.4265 | 44.3 | 6600 | 0.5117 | 0.7623 | 0.763 |
| 0.425 | 45.64 | 6800 | 0.5112 | 0.7520 | 0.752 |
| 0.4216 | 46.98 | 7000 | 0.5081 | 0.7596 | 0.76 |
| 0.4221 | 48.32 | 7200 | 0.5152 | 0.7548 | 0.755 |
| 0.4184 | 49.66 | 7400 | 0.5089 | 0.7540 | 0.754 |
| 0.4166 | 51.01 | 7600 | 0.5124 | 0.7519 | 0.752 |
| 0.4203 | 52.35 | 7800 | 0.5132 | 0.7623 | 0.763 |
| 0.4181 | 53.69 | 8000 | 0.5113 | 0.7580 | 0.759 |
| 0.4156 | 55.03 | 8200 | 0.5172 | 0.7548 | 0.755 |
| 0.4136 | 56.38 | 8400 | 0.5149 | 0.7568 | 0.757 |
| 0.4192 | 57.72 | 8600 | 0.5079 | 0.7576 | 0.758 |
| 0.4104 | 59.06 | 8800 | 0.5129 | 0.7586 | 0.759 |
| 0.4129 | 60.4 | 9000 | 0.5173 | 0.7566 | 0.757 |
| 0.4091 | 61.74 | 9200 | 0.5170 | 0.7535 | 0.754 |
| 0.4127 | 63.09 | 9400 | 0.5154 | 0.7577 | 0.758 |
| 0.4114 | 64.43 | 9600 | 0.5171 | 0.7504 | 0.751 |
| 0.4126 | 65.77 | 9800 | 0.5154 | 0.7506 | 0.751 |
| 0.407 | 67.11 | 10000 | 0.5165 | 0.7496 | 0.75 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_tf_2-seqsight_16384_512_34M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_16384_512_34M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
] | null | 2024-04-29T23:52:31+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_tf\_2-seqsight\_16384\_512\_34M-L8\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4944
* F1 Score: 0.7843
* Accuracy: 0.785
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Shure-Dev/llava-vima-adapter | null | [
"transformers",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T23:54:02+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #tensorboard #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #tensorboard #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
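The card leaves this section empty. As a placeholder, a generic sketch for a Llama-architecture causal LM (which the repository tags indicate) might look like the following — only the repo id comes from this repository's metadata; everything else is standard `transformers` usage, not author-provided code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "shallow6414/fu2bmri"  # from this repository's metadata
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```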
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/fu2bmri | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T23:55:28+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_16384_512_34M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set (a note on the loss follows the list):
- Loss: 0.4711
- F1 Score: 0.7790
- Accuracy: 0.779
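Under the usual `transformers` sequence-classification setup, the reported loss is the mean cross-entropy over the evaluation set — an assumption, since the card does not name the criterion:

$$\mathcal{L}_{\text{eval}} = -\frac{1}{N}\sum_{i=1}^{N} \log p_\theta\left(y_i \mid x_i\right)$$

where $p_\theta(y_i \mid x_i)$ is the softmax probability assigned to the true label; accuracy is then the fraction of examples with $\arg\max_c p_\theta(c \mid x_i) = y_i$.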
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the Adam update rule is given after the list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
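For reference, the Adam update implied by the betas and epsilon listed above is the standard one (no weight decay is reported):

$$m_t = \beta_1 m_{t-1} + (1-\beta_1)\,g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\,g_t^2$$

$$\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1-\beta_2^t}, \qquad \theta_t = \theta_{t-1} - \eta_t\,\frac{\hat{m}_t}{\sqrt{\hat{v}_t}+\epsilon}$$

with $\beta_1=0.9$, $\beta_2=0.999$, $\epsilon=10^{-8}$, and $\eta_t$ following the linear schedule from $5\times10^{-4}$.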
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.565 | 1.34 | 200 | 0.5416 | 0.7154 | 0.722 |
| 0.5257 | 2.68 | 400 | 0.5248 | 0.7437 | 0.745 |
| 0.5128 | 4.03 | 600 | 0.5029 | 0.7445 | 0.745 |
| 0.5004 | 5.37 | 800 | 0.5048 | 0.7459 | 0.746 |
| 0.4935 | 6.71 | 1000 | 0.4977 | 0.7440 | 0.744 |
| 0.4833 | 8.05 | 1200 | 0.5044 | 0.7382 | 0.741 |
| 0.4757 | 9.4 | 1400 | 0.5039 | 0.7498 | 0.751 |
| 0.4693 | 10.74 | 1600 | 0.4816 | 0.7569 | 0.757 |
| 0.4619 | 12.08 | 1800 | 0.4916 | 0.7575 | 0.758 |
| 0.4536 | 13.42 | 2000 | 0.4928 | 0.7690 | 0.769 |
| 0.4496 | 14.77 | 2200 | 0.5044 | 0.7509 | 0.754 |
| 0.4345 | 16.11 | 2400 | 0.4973 | 0.7659 | 0.766 |
| 0.4316 | 17.45 | 2600 | 0.5027 | 0.7573 | 0.759 |
| 0.4234 | 18.79 | 2800 | 0.5055 | 0.7577 | 0.758 |
| 0.4115 | 20.13 | 3000 | 0.5125 | 0.7608 | 0.761 |
| 0.4056 | 21.48 | 3200 | 0.5209 | 0.7620 | 0.762 |
| 0.4013 | 22.82 | 3400 | 0.5119 | 0.7526 | 0.753 |
| 0.3899 | 24.16 | 3600 | 0.5325 | 0.7586 | 0.759 |
| 0.3819 | 25.5 | 3800 | 0.5031 | 0.7640 | 0.764 |
| 0.3793 | 26.85 | 4000 | 0.5146 | 0.7646 | 0.765 |
| 0.3698 | 28.19 | 4200 | 0.5650 | 0.7458 | 0.746 |
| 0.3607 | 29.53 | 4400 | 0.5615 | 0.7480 | 0.748 |
| 0.3548 | 30.87 | 4600 | 0.5569 | 0.7541 | 0.755 |
| 0.3482 | 32.21 | 4800 | 0.5775 | 0.7519 | 0.752 |
| 0.3465 | 33.56 | 5000 | 0.5868 | 0.7436 | 0.744 |
| 0.3436 | 34.9 | 5200 | 0.5777 | 0.7490 | 0.75 |
| 0.3278 | 36.24 | 5400 | 0.5758 | 0.7426 | 0.743 |
| 0.3255 | 37.58 | 5600 | 0.5877 | 0.7409 | 0.741 |
| 0.3154 | 38.93 | 5800 | 0.5989 | 0.7477 | 0.748 |
| 0.3152 | 40.27 | 6000 | 0.6108 | 0.7370 | 0.739 |
| 0.3089 | 41.61 | 6200 | 0.5791 | 0.7507 | 0.751 |
| 0.3057 | 42.95 | 6400 | 0.6042 | 0.7510 | 0.751 |
| 0.3025 | 44.3 | 6600 | 0.6257 | 0.7574 | 0.758 |
| 0.2966 | 45.64 | 6800 | 0.6340 | 0.7480 | 0.748 |
| 0.2896 | 46.98 | 7000 | 0.6283 | 0.7428 | 0.743 |
| 0.2867 | 48.32 | 7200 | 0.6385 | 0.7489 | 0.749 |
| 0.2813 | 49.66 | 7400 | 0.6416 | 0.7448 | 0.745 |
| 0.2793 | 51.01 | 7600 | 0.6416 | 0.7499 | 0.75 |
| 0.2805 | 52.35 | 7800 | 0.6432 | 0.7389 | 0.739 |
| 0.273 | 53.69 | 8000 | 0.6337 | 0.7529 | 0.753 |
| 0.2707 | 55.03 | 8200 | 0.6678 | 0.7520 | 0.752 |
| 0.2638 | 56.38 | 8400 | 0.6665 | 0.7490 | 0.749 |
| 0.2719 | 57.72 | 8600 | 0.6461 | 0.7419 | 0.742 |
| 0.2629 | 59.06 | 8800 | 0.6562 | 0.7549 | 0.755 |
| 0.2615 | 60.4 | 9000 | 0.6619 | 0.7469 | 0.747 |
| 0.2599 | 61.74 | 9200 | 0.6712 | 0.7459 | 0.746 |
| 0.2548 | 63.09 | 9400 | 0.6726 | 0.7480 | 0.748 |
| 0.2575 | 64.43 | 9600 | 0.6753 | 0.7438 | 0.744 |
| 0.2567 | 65.77 | 9800 | 0.6751 | 0.7430 | 0.743 |
| 0.2551 | 67.11 | 10000 | 0.6733 | 0.7430 | 0.743 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_tf_2-seqsight_16384_512_34M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_16384_512_34M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
] | null | 2024-04-29T23:58:30+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_tf\_2-seqsight\_16384\_512\_34M-L32\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4711
* F1 Score: 0.7790
* Accuracy: 0.779
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# financial-twhin-bert-large-3labels-test1
This model is a fine-tuned version of [Twitter/twhin-bert-large](https://huggingface.co/Twitter/twhin-bert-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3334
- Accuracy: 0.8826
- F1: 0.8823
## Model description
More information needed
## Intended uses & limitations
More information needed
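Until that is filled in, the sketch below shows the most direct way to query the classifier. Treat it as an illustration rather than documented usage: the returned label names come from whatever `id2label` mapping was saved with the checkpoint, and the card does not list the three classes.

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="kekunh/financial-twhin-bert-large-3labels-test1",
)
print(clf("$AAPL beats earnings expectations and raises full-year guidance."))
# -> [{'label': ..., 'score': ...}]  # one of three labels; names not documented here
```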
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.656814753771254e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 1203
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 0.9822 | 0.1550 | 100 | 0.7065 | 0.6772 | 0.5469 |
| 0.7307 | 0.3101 | 200 | 0.5716 | 0.7471 | 0.7179 |
| 0.6482 | 0.4651 | 300 | 0.5388 | 0.7716 | 0.7493 |
| 0.6008 | 0.6202 | 400 | 0.4300 | 0.8494 | 0.8446 |
| 0.5237 | 0.7752 | 500 | 0.4190 | 0.8343 | 0.8401 |
| 0.5106 | 0.9302 | 600 | 0.4114 | 0.8444 | 0.8404 |
| 0.4832 | 1.0853 | 700 | 0.3865 | 0.8545 | 0.8596 |
| 0.4031 | 1.2403 | 800 | 0.3741 | 0.8602 | 0.8653 |
| 0.3729 | 1.3953 | 900 | 0.3334 | 0.8826 | 0.8823 |
| 0.3661 | 1.5504 | 1000 | 0.3494 | 0.8725 | 0.8750 |
| 0.332 | 1.7054 | 1100 | 0.3390 | 0.8725 | 0.8753 |
| 0.3637 | 1.8605 | 1200 | 0.3386 | 0.8689 | 0.8724 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "Twitter/twhin-bert-large", "model-index": [{"name": "financial-twhin-bert-large-3labels-test1", "results": []}]} | kekunh/financial-twhin-bert-large-3labels-test1 | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:Twitter/twhin-bert-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T23:58:50+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-Twitter/twhin-bert-large #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| financial-twhin-bert-large-3labels-test1
========================================
This model is a fine-tuned version of Twitter/twhin-bert-large on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3334
* Accuracy: 0.8826
* F1: 0.8823
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 9.656814753771254e-06
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 1203
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 9.656814753771254e-06\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 1203\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-Twitter/twhin-bert-large #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 9.656814753771254e-06\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 1203\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
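Pending the author's own snippet, here is a minimal, hedged loading sketch based only on the repository tags (mistral, text-generation, trl/sft, 4-bit). The prompt wording is an assumption, since the SFT template used in training is not documented.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rntc/mistral-ner-ncbi"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The repo is tagged 4-bit; from_pretrained will pick up a saved quantization
# config if one is present (device_map="auto" requires accelerate).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Extract disease mentions: The patient was diagnosed with type 2 diabetes."  # assumed format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```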
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": ["trl", "sft"]} | rntc/mistral-ner-ncbi | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-29T23:59:05+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_virus_covid-seqsight_16384_512_34M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6538
- F1 Score: 0.3667
- Accuracy: 0.3649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
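Because this is a PEFT run (PEFT 0.9.0 appears under framework versions), the base model stays frozen and only a small adapter is trained. The sketch below uses LoRA purely as an illustration; the card records neither the PEFT method nor its settings, and the label count of the covid task is likewise an assumption.

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base = "mahdibaghbanzadeh/seqsight_16384_512_34M"
# num_labels=9 is an assumption; the card does not state the class count.
# Custom genomic backbones may also need trust_remote_code=True.
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=9)

# Placeholder adapter settings; custom architectures may additionally require
# an explicit target_modules list naming the attention projections.
peft_config = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16, lora_dropout=0.1)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```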
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 2.185 | 0.35 | 200 | 2.1845 | 0.0925 | 0.1377 |
| 2.1799 | 0.7 | 400 | 2.1802 | 0.0773 | 0.1364 |
| 2.1735 | 1.05 | 600 | 2.1707 | 0.1249 | 0.1548 |
| 2.1672 | 1.4 | 800 | 2.1637 | 0.0934 | 0.1508 |
| 2.1568 | 1.75 | 1000 | 2.1498 | 0.1267 | 0.1682 |
| 2.1493 | 2.09 | 1200 | 2.1509 | 0.1186 | 0.1672 |
| 2.141 | 2.44 | 1400 | 2.1377 | 0.1316 | 0.1619 |
| 2.1285 | 2.79 | 1600 | 2.1036 | 0.1714 | 0.2024 |
| 2.1019 | 3.14 | 1800 | 2.0538 | 0.1961 | 0.2256 |
| 2.0577 | 3.49 | 2000 | 2.0193 | 0.2156 | 0.2387 |
| 2.0443 | 3.84 | 2200 | 1.9846 | 0.2462 | 0.2530 |
| 2.0211 | 4.19 | 2400 | 1.9571 | 0.2563 | 0.2651 |
| 1.9871 | 4.54 | 2600 | 1.9217 | 0.2673 | 0.2726 |
| 1.978 | 4.89 | 2800 | 1.8995 | 0.2833 | 0.2897 |
| 1.9533 | 5.24 | 3000 | 1.8685 | 0.2851 | 0.2948 |
| 1.931 | 5.58 | 3200 | 1.8383 | 0.2935 | 0.3019 |
| 1.9066 | 5.93 | 3400 | 1.8339 | 0.2979 | 0.3045 |
| 1.8911 | 6.28 | 3600 | 1.8054 | 0.3185 | 0.3169 |
| 1.8727 | 6.63 | 3800 | 1.7915 | 0.3062 | 0.3163 |
| 1.8683 | 6.98 | 4000 | 1.7709 | 0.3207 | 0.3237 |
| 1.8469 | 7.33 | 4200 | 1.7542 | 0.3355 | 0.3355 |
| 1.8405 | 7.68 | 4400 | 1.7431 | 0.3434 | 0.3390 |
| 1.8212 | 8.03 | 4600 | 1.7404 | 0.3300 | 0.3323 |
| 1.808 | 8.38 | 4800 | 1.7248 | 0.3339 | 0.3396 |
| 1.8189 | 8.73 | 5000 | 1.7224 | 0.3366 | 0.3434 |
| 1.8023 | 9.08 | 5200 | 1.7206 | 0.3467 | 0.3427 |
| 1.7995 | 9.42 | 5400 | 1.7082 | 0.3398 | 0.3484 |
| 1.7902 | 9.77 | 5600 | 1.7052 | 0.3479 | 0.3541 |
| 1.7871 | 10.12 | 5800 | 1.6936 | 0.3555 | 0.3535 |
| 1.7825 | 10.47 | 6000 | 1.6918 | 0.3543 | 0.3551 |
| 1.7831 | 10.82 | 6200 | 1.6883 | 0.3519 | 0.3536 |
| 1.7684 | 11.17 | 6400 | 1.6893 | 0.3454 | 0.3463 |
| 1.7747 | 11.52 | 6600 | 1.6829 | 0.3577 | 0.3562 |
| 1.7713 | 11.87 | 6800 | 1.6801 | 0.3508 | 0.3549 |
| 1.7615 | 12.22 | 7000 | 1.6723 | 0.3609 | 0.3645 |
| 1.7563 | 12.57 | 7200 | 1.6641 | 0.3600 | 0.3641 |
| 1.7649 | 12.91 | 7400 | 1.6695 | 0.3607 | 0.3623 |
| 1.7585 | 13.26 | 7600 | 1.6703 | 0.3517 | 0.3565 |
| 1.7545 | 13.61 | 7800 | 1.6663 | 0.3588 | 0.3607 |
| 1.7538 | 13.96 | 8000 | 1.6648 | 0.3578 | 0.3611 |
| 1.7427 | 14.31 | 8200 | 1.6619 | 0.3559 | 0.3620 |
| 1.754 | 14.66 | 8400 | 1.6603 | 0.3687 | 0.3676 |
| 1.7486 | 15.01 | 8600 | 1.6656 | 0.3538 | 0.3583 |
| 1.7524 | 15.36 | 8800 | 1.6535 | 0.3718 | 0.3696 |
| 1.7398 | 15.71 | 9000 | 1.6560 | 0.3627 | 0.3670 |
| 1.7465 | 16.06 | 9200 | 1.6551 | 0.3645 | 0.3671 |
| 1.7474 | 16.4 | 9400 | 1.6534 | 0.3619 | 0.3666 |
| 1.7433 | 16.75 | 9600 | 1.6560 | 0.3585 | 0.3620 |
| 1.7472 | 17.1 | 9800 | 1.6535 | 0.3620 | 0.3654 |
| 1.7495 | 17.45 | 10000 | 1.6536 | 0.3629 | 0.3661 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_virus_covid-seqsight_16384_512_34M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_virus_covid-seqsight_16384_512_34M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
] | null | 2024-04-29T23:59:08+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_virus\_covid-seqsight\_16384\_512\_34M-L1\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6538
* F1 Score: 0.3667
* Accuracy: 0.3649
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_virus_covid-seqsight_16384_512_34M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2360
- F1 Score: 0.5387
- Accuracy: 0.5447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
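The two reported columns, F1 Score and Accuracy, are what a standard `compute_metrics` callback would produce; a sketch follows. Macro averaging is an assumption, since the card does not say how multi-class F1 was aggregated.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),  # averaging mode assumed
        "accuracy": accuracy_score(labels, preds),
    }
```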
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 2.1846 | 0.35 | 200 | 2.1815 | 0.0993 | 0.1367 |
| 2.177 | 0.7 | 400 | 2.1693 | 0.0977 | 0.1434 |
| 2.1595 | 1.05 | 600 | 2.1526 | 0.1289 | 0.1564 |
| 2.1346 | 1.4 | 800 | 2.0892 | 0.1865 | 0.2173 |
| 2.0629 | 1.75 | 1000 | 1.9790 | 0.2261 | 0.2651 |
| 1.9848 | 2.09 | 1200 | 1.8932 | 0.2653 | 0.2949 |
| 1.9034 | 2.44 | 1400 | 1.8144 | 0.2956 | 0.3072 |
| 1.8515 | 2.79 | 1600 | 1.7238 | 0.3161 | 0.3433 |
| 1.7943 | 3.14 | 1800 | 1.7076 | 0.3292 | 0.3373 |
| 1.7496 | 3.49 | 2000 | 1.6351 | 0.3700 | 0.3796 |
| 1.7256 | 3.84 | 2200 | 1.6071 | 0.3694 | 0.3792 |
| 1.6869 | 4.19 | 2400 | 1.5576 | 0.4093 | 0.4089 |
| 1.6578 | 4.54 | 2600 | 1.5207 | 0.4142 | 0.4260 |
| 1.6324 | 4.89 | 2800 | 1.5063 | 0.4172 | 0.4288 |
| 1.605 | 5.24 | 3000 | 1.4794 | 0.4296 | 0.4360 |
| 1.5848 | 5.58 | 3200 | 1.4543 | 0.4392 | 0.4486 |
| 1.5653 | 5.93 | 3400 | 1.4587 | 0.4262 | 0.4363 |
| 1.5542 | 6.28 | 3600 | 1.4326 | 0.4479 | 0.4486 |
| 1.5348 | 6.63 | 3800 | 1.4298 | 0.4339 | 0.4450 |
| 1.5191 | 6.98 | 4000 | 1.4086 | 0.4577 | 0.4655 |
| 1.5073 | 7.33 | 4200 | 1.3986 | 0.4491 | 0.4633 |
| 1.4959 | 7.68 | 4400 | 1.3786 | 0.4686 | 0.4724 |
| 1.4888 | 8.03 | 4600 | 1.3834 | 0.4658 | 0.4660 |
| 1.4674 | 8.38 | 4800 | 1.3900 | 0.4559 | 0.4687 |
| 1.4747 | 8.73 | 5000 | 1.3600 | 0.4728 | 0.4842 |
| 1.4548 | 9.08 | 5200 | 1.3508 | 0.4773 | 0.4794 |
| 1.4512 | 9.42 | 5400 | 1.3422 | 0.4804 | 0.4912 |
| 1.4369 | 9.77 | 5600 | 1.3414 | 0.4882 | 0.4955 |
| 1.4378 | 10.12 | 5800 | 1.3160 | 0.4988 | 0.5029 |
| 1.4275 | 10.47 | 6000 | 1.3072 | 0.5071 | 0.5118 |
| 1.426 | 10.82 | 6200 | 1.3132 | 0.5105 | 0.5107 |
| 1.4003 | 11.17 | 6400 | 1.3077 | 0.5080 | 0.5143 |
| 1.4042 | 11.52 | 6600 | 1.2944 | 0.5130 | 0.5136 |
| 1.4096 | 11.87 | 6800 | 1.2964 | 0.5100 | 0.5164 |
| 1.4074 | 12.22 | 7000 | 1.2879 | 0.5130 | 0.5185 |
| 1.3861 | 12.57 | 7200 | 1.2776 | 0.5159 | 0.5223 |
| 1.3896 | 12.91 | 7400 | 1.2898 | 0.5104 | 0.5116 |
| 1.3864 | 13.26 | 7600 | 1.2781 | 0.5138 | 0.5177 |
| 1.3784 | 13.61 | 7800 | 1.2689 | 0.5195 | 0.5225 |
| 1.3758 | 13.96 | 8000 | 1.2612 | 0.5254 | 0.5298 |
| 1.3733 | 14.31 | 8200 | 1.2640 | 0.5223 | 0.5268 |
| 1.3752 | 14.66 | 8400 | 1.2577 | 0.5239 | 0.5302 |
| 1.372 | 15.01 | 8600 | 1.2585 | 0.5268 | 0.5299 |
| 1.371 | 15.36 | 8800 | 1.2526 | 0.5300 | 0.5327 |
| 1.3576 | 15.71 | 9000 | 1.2481 | 0.5327 | 0.5375 |
| 1.3643 | 16.06 | 9200 | 1.2469 | 0.5318 | 0.5363 |
| 1.3633 | 16.4 | 9400 | 1.2445 | 0.5353 | 0.5404 |
| 1.3556 | 16.75 | 9600 | 1.2464 | 0.5342 | 0.5369 |
| 1.3613 | 17.1 | 9800 | 1.2437 | 0.5351 | 0.5391 |
| 1.3601 | 17.45 | 10000 | 1.2441 | 0.5356 | 0.5400 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_virus_covid-seqsight_16384_512_34M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_virus_covid-seqsight_16384_512_34M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
] | null | 2024-04-29T23:59:38+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_virus\_covid-seqsight\_16384\_512\_34M-L8\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2360
* F1 Score: 0.5387
* Accuracy: 0.5447
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_virus_covid-seqsight_16384_512_34M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9826
- F1 Score: 0.6360
- Accuracy: 0.6283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
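Running the published adapter means loading it on top of the frozen base model. A hedged inference sketch (the label count and the example sequence are placeholders):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base = "mahdibaghbanzadeh/seqsight_16384_512_34M"
adapter = "mahdibaghbanzadeh/GUE_virus_covid-seqsight_16384_512_34M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=9)  # assumed
model = PeftModel.from_pretrained(model, adapter)
model.eval()

inputs = tokenizer("ATGCGT" * 40, return_tensors="pt")  # placeholder DNA sequence
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1)
print(pred)
```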
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 2.1841 | 0.35 | 200 | 2.1825 | 0.1071 | 0.1321 |
| 2.1721 | 0.7 | 400 | 2.1585 | 0.1205 | 0.1510 |
| 2.1245 | 1.05 | 600 | 2.0431 | 0.2050 | 0.2326 |
| 2.0044 | 1.4 | 800 | 1.8727 | 0.2578 | 0.2894 |
| 1.8528 | 1.75 | 1000 | 1.7159 | 0.3434 | 0.3516 |
| 1.7458 | 2.09 | 1200 | 1.6172 | 0.3680 | 0.3886 |
| 1.6515 | 2.44 | 1400 | 1.5211 | 0.4105 | 0.4237 |
| 1.5736 | 2.79 | 1600 | 1.4302 | 0.4420 | 0.4536 |
| 1.5057 | 3.14 | 1800 | 1.3715 | 0.4621 | 0.4738 |
| 1.4434 | 3.49 | 2000 | 1.3366 | 0.4903 | 0.4869 |
| 1.4221 | 3.84 | 2200 | 1.3209 | 0.4903 | 0.5004 |
| 1.3858 | 4.19 | 2400 | 1.2716 | 0.5105 | 0.5124 |
| 1.3699 | 4.54 | 2600 | 1.2453 | 0.5217 | 0.5280 |
| 1.3503 | 4.89 | 2800 | 1.2346 | 0.5183 | 0.5267 |
| 1.3237 | 5.24 | 3000 | 1.2279 | 0.5308 | 0.5320 |
| 1.3119 | 5.58 | 3200 | 1.1937 | 0.5380 | 0.5410 |
| 1.2802 | 5.93 | 3400 | 1.1755 | 0.5521 | 0.5543 |
| 1.271 | 6.28 | 3600 | 1.1716 | 0.5564 | 0.5507 |
| 1.2494 | 6.63 | 3800 | 1.1463 | 0.5595 | 0.5593 |
| 1.2378 | 6.98 | 4000 | 1.1370 | 0.5674 | 0.5681 |
| 1.2271 | 7.33 | 4200 | 1.1182 | 0.5788 | 0.5790 |
| 1.2002 | 7.68 | 4400 | 1.0970 | 0.5929 | 0.5852 |
| 1.2012 | 8.03 | 4600 | 1.0874 | 0.5865 | 0.5868 |
| 1.1805 | 8.38 | 4800 | 1.0946 | 0.5811 | 0.5796 |
| 1.1771 | 8.73 | 5000 | 1.0644 | 0.5943 | 0.5943 |
| 1.1578 | 9.08 | 5200 | 1.0658 | 0.5995 | 0.5913 |
| 1.1528 | 9.42 | 5400 | 1.0480 | 0.5977 | 0.5992 |
| 1.133 | 9.77 | 5600 | 1.0409 | 0.6064 | 0.6041 |
| 1.139 | 10.12 | 5800 | 1.0319 | 0.6076 | 0.6042 |
| 1.1282 | 10.47 | 6000 | 1.0245 | 0.6108 | 0.6075 |
| 1.1134 | 10.82 | 6200 | 1.0224 | 0.6191 | 0.6104 |
| 1.0873 | 11.17 | 6400 | 1.0175 | 0.6123 | 0.6111 |
| 1.0938 | 11.52 | 6600 | 1.0128 | 0.6190 | 0.6110 |
| 1.1039 | 11.87 | 6800 | 1.0059 | 0.6110 | 0.6070 |
| 1.089 | 12.22 | 7000 | 0.9991 | 0.6191 | 0.6168 |
| 1.0764 | 12.57 | 7200 | 0.9963 | 0.6234 | 0.6187 |
| 1.0864 | 12.91 | 7400 | 0.9980 | 0.6206 | 0.6167 |
| 1.0752 | 13.26 | 7600 | 0.9927 | 0.6225 | 0.6168 |
| 1.0694 | 13.61 | 7800 | 0.9894 | 0.6269 | 0.6183 |
| 1.0656 | 13.96 | 8000 | 0.9848 | 0.6261 | 0.6220 |
| 1.0615 | 14.31 | 8200 | 0.9826 | 0.6302 | 0.6242 |
| 1.0637 | 14.66 | 8400 | 0.9836 | 0.6274 | 0.6201 |
| 1.0588 | 15.01 | 8600 | 0.9770 | 0.6270 | 0.6236 |
| 1.0605 | 15.36 | 8800 | 0.9739 | 0.6350 | 0.6246 |
| 1.0473 | 15.71 | 9000 | 0.9681 | 0.6335 | 0.6284 |
| 1.0588 | 16.06 | 9200 | 0.9726 | 0.6314 | 0.6236 |
| 1.0489 | 16.4 | 9400 | 0.9675 | 0.6335 | 0.6297 |
| 1.05 | 16.75 | 9600 | 0.9683 | 0.6317 | 0.6259 |
| 1.0506 | 17.1 | 9800 | 0.9687 | 0.6332 | 0.6275 |
| 1.0487 | 17.45 | 10000 | 0.9688 | 0.6342 | 0.6278 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_virus_covid-seqsight_16384_512_34M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_virus_covid-seqsight_16384_512_34M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
] | null | 2024-04-30T00:00:11+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_virus\_covid-seqsight\_16384\_512\_34M-L32\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9826
* F1 Score: 0.6360
* Accuracy: 0.6283
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_tata-seqsight_16384_512_56M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4621
- F1 Score: 0.8157
- Accuracy: 0.8157
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
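The data comes from the Hub dataset linked above; a minimal loading-and-tokenization sketch follows. The column name `sequence` and the split layout are assumptions; inspect the loaded dataset before relying on them.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("mahdibaghbanzadeh/GUE_prom_prom_300_tata")
tokenizer = AutoTokenizer.from_pretrained("mahdibaghbanzadeh/seqsight_16384_512_56M")

def tokenize(batch):
    # "sequence" is an assumed column name; check ds["train"].column_names first.
    return tokenizer(batch["sequence"], truncation=True)

ds = ds.map(tokenize, batched=True)
```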
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5684 | 5.13 | 200 | 0.5014 | 0.7519 | 0.7520 |
| 0.4641 | 10.26 | 400 | 0.4929 | 0.7707 | 0.7716 |
| 0.4367 | 15.38 | 600 | 0.4584 | 0.7978 | 0.7977 |
| 0.4177 | 20.51 | 800 | 0.4528 | 0.7994 | 0.7993 |
| 0.4073 | 25.64 | 1000 | 0.4468 | 0.8109 | 0.8108 |
| 0.3956 | 30.77 | 1200 | 0.4416 | 0.8191 | 0.8189 |
| 0.385 | 35.9 | 1400 | 0.4408 | 0.8174 | 0.8173 |
| 0.3716 | 41.03 | 1600 | 0.4425 | 0.8123 | 0.8124 |
| 0.3613 | 46.15 | 1800 | 0.4527 | 0.8223 | 0.8222 |
| 0.3532 | 51.28 | 2000 | 0.4432 | 0.8141 | 0.8140 |
| 0.3397 | 56.41 | 2200 | 0.4747 | 0.8141 | 0.8140 |
| 0.3327 | 61.54 | 2400 | 0.4702 | 0.8043 | 0.8042 |
| 0.3254 | 66.67 | 2600 | 0.4712 | 0.7995 | 0.7993 |
| 0.3152 | 71.79 | 2800 | 0.4949 | 0.8074 | 0.8075 |
| 0.3136 | 76.92 | 3000 | 0.5058 | 0.7989 | 0.7993 |
| 0.2988 | 82.05 | 3200 | 0.5046 | 0.7962 | 0.7961 |
| 0.2993 | 87.18 | 3400 | 0.5456 | 0.7782 | 0.7798 |
| 0.2886 | 92.31 | 3600 | 0.5096 | 0.7991 | 0.7993 |
| 0.2896 | 97.44 | 3800 | 0.4955 | 0.8011 | 0.8010 |
| 0.2739 | 102.56 | 4000 | 0.5614 | 0.7836 | 0.7847 |
| 0.2688 | 107.69 | 4200 | 0.5333 | 0.7811 | 0.7814 |
| 0.2645 | 112.82 | 4400 | 0.5324 | 0.8010 | 0.8010 |
| 0.2602 | 117.95 | 4600 | 0.5987 | 0.7763 | 0.7781 |
| 0.2547 | 123.08 | 4800 | 0.5673 | 0.7790 | 0.7798 |
| 0.2504 | 128.21 | 5000 | 0.5594 | 0.7827 | 0.7830 |
| 0.2462 | 133.33 | 5200 | 0.5533 | 0.7878 | 0.7879 |
| 0.2404 | 138.46 | 5400 | 0.5762 | 0.7843 | 0.7847 |
| 0.2359 | 143.59 | 5600 | 0.5974 | 0.7807 | 0.7814 |
| 0.2331 | 148.72 | 5800 | 0.5747 | 0.7844 | 0.7847 |
| 0.2264 | 153.85 | 6000 | 0.5946 | 0.7808 | 0.7814 |
| 0.2215 | 158.97 | 6200 | 0.6133 | 0.7778 | 0.7781 |
| 0.221 | 164.1 | 6400 | 0.6028 | 0.7793 | 0.7798 |
| 0.2146 | 169.23 | 6600 | 0.6118 | 0.7744 | 0.7749 |
| 0.2097 | 174.36 | 6800 | 0.6253 | 0.7826 | 0.7830 |
| 0.2099 | 179.49 | 7000 | 0.6308 | 0.7791 | 0.7798 |
| 0.2023 | 184.62 | 7200 | 0.6359 | 0.7761 | 0.7765 |
| 0.2055 | 189.74 | 7400 | 0.6495 | 0.7791 | 0.7798 |
| 0.2018 | 194.87 | 7600 | 0.6648 | 0.7770 | 0.7781 |
| 0.1988 | 200.0 | 7800 | 0.6393 | 0.7743 | 0.7749 |
| 0.1985 | 205.13 | 8000 | 0.6531 | 0.7741 | 0.7749 |
| 0.1926 | 210.26 | 8200 | 0.6560 | 0.7777 | 0.7781 |
| 0.1968 | 215.38 | 8400 | 0.6650 | 0.7757 | 0.7765 |
| 0.1908 | 220.51 | 8600 | 0.6813 | 0.7773 | 0.7781 |
| 0.19 | 225.64 | 8800 | 0.6815 | 0.7705 | 0.7716 |
| 0.189 | 230.77 | 9000 | 0.6706 | 0.7690 | 0.7700 |
| 0.1865 | 235.9 | 9200 | 0.6876 | 0.7705 | 0.7716 |
| 0.1869 | 241.03 | 9400 | 0.6743 | 0.7741 | 0.7749 |
| 0.1814 | 246.15 | 9600 | 0.6795 | 0.7742 | 0.7749 |
| 0.1837 | 251.28 | 9800 | 0.6889 | 0.7722 | 0.7732 |
| 0.1797 | 256.41 | 10000 | 0.6888 | 0.7706 | 0.7716 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_16384_512_56M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_16384_512_56M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:01:17+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_300\_tata-seqsight\_16384\_512\_56M-L1\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4621
* F1 Score: 0.8157
* Accuracy: 0.8157
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_tata-seqsight_16384_512_56M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4823
- F1 Score: 0.8189
- Accuracy: 0.8189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.519 | 5.13 | 200 | 0.4642 | 0.7928 | 0.7928 |
| 0.4209 | 10.26 | 400 | 0.5113 | 0.7815 | 0.7847 |
| 0.3755 | 15.38 | 600 | 0.4580 | 0.8058 | 0.8059 |
| 0.3391 | 20.51 | 800 | 0.4625 | 0.8073 | 0.8075 |
| 0.3141 | 25.64 | 1000 | 0.5414 | 0.7800 | 0.7814 |
| 0.2883 | 30.77 | 1200 | 0.5123 | 0.7743 | 0.7749 |
| 0.2742 | 35.9 | 1400 | 0.5339 | 0.7697 | 0.7700 |
| 0.253 | 41.03 | 1600 | 0.5208 | 0.7863 | 0.7863 |
| 0.2366 | 46.15 | 1800 | 0.5753 | 0.7700 | 0.7700 |
| 0.2192 | 51.28 | 2000 | 0.5782 | 0.7598 | 0.7602 |
| 0.2056 | 56.41 | 2200 | 0.6293 | 0.7716 | 0.7716 |
| 0.1955 | 61.54 | 2400 | 0.6578 | 0.7633 | 0.7635 |
| 0.1808 | 66.67 | 2600 | 0.6893 | 0.7766 | 0.7765 |
| 0.17 | 71.79 | 2800 | 0.7857 | 0.7517 | 0.7537 |
| 0.1704 | 76.92 | 3000 | 0.6727 | 0.7732 | 0.7732 |
| 0.1547 | 82.05 | 3200 | 0.7369 | 0.7699 | 0.7700 |
| 0.1473 | 87.18 | 3400 | 0.7855 | 0.7675 | 0.7684 |
| 0.1388 | 92.31 | 3600 | 0.7485 | 0.7766 | 0.7765 |
| 0.1359 | 97.44 | 3800 | 0.7609 | 0.7799 | 0.7798 |
| 0.1259 | 102.56 | 4000 | 0.8648 | 0.7595 | 0.7602 |
| 0.1222 | 107.69 | 4200 | 0.8629 | 0.7697 | 0.7700 |
| 0.1145 | 112.82 | 4400 | 0.8603 | 0.7829 | 0.7830 |
| 0.1113 | 117.95 | 4600 | 0.9160 | 0.7692 | 0.7700 |
| 0.1016 | 123.08 | 4800 | 0.8676 | 0.7750 | 0.7749 |
| 0.0992 | 128.21 | 5000 | 0.9185 | 0.7798 | 0.7798 |
| 0.0969 | 133.33 | 5200 | 0.9657 | 0.7632 | 0.7635 |
| 0.0932 | 138.46 | 5400 | 0.9496 | 0.7796 | 0.7798 |
| 0.0927 | 143.59 | 5600 | 0.9236 | 0.7750 | 0.7749 |
| 0.0862 | 148.72 | 5800 | 0.9916 | 0.7813 | 0.7814 |
| 0.0832 | 153.85 | 6000 | 1.0038 | 0.7780 | 0.7781 |
| 0.0792 | 158.97 | 6200 | 1.0431 | 0.7645 | 0.7651 |
| 0.0779 | 164.1 | 6400 | 1.0209 | 0.7766 | 0.7765 |
| 0.0749 | 169.23 | 6600 | 1.0401 | 0.7829 | 0.7830 |
| 0.0766 | 174.36 | 6800 | 1.0782 | 0.7746 | 0.7749 |
| 0.0723 | 179.49 | 7000 | 1.0120 | 0.7864 | 0.7863 |
| 0.0677 | 184.62 | 7200 | 1.0682 | 0.7766 | 0.7765 |
| 0.0718 | 189.74 | 7400 | 1.0821 | 0.7763 | 0.7765 |
| 0.0666 | 194.87 | 7600 | 1.1264 | 0.7796 | 0.7798 |
| 0.0638 | 200.0 | 7800 | 1.1214 | 0.7829 | 0.7830 |
| 0.064 | 205.13 | 8000 | 1.1559 | 0.7707 | 0.7716 |
| 0.0643 | 210.26 | 8200 | 1.1040 | 0.7846 | 0.7847 |
| 0.0618 | 215.38 | 8400 | 1.1528 | 0.7827 | 0.7830 |
| 0.0631 | 220.51 | 8600 | 1.1330 | 0.7828 | 0.7830 |
| 0.0605 | 225.64 | 8800 | 1.1738 | 0.7712 | 0.7716 |
| 0.0593 | 230.77 | 9000 | 1.1314 | 0.7895 | 0.7896 |
| 0.0586 | 235.9 | 9200 | 1.1572 | 0.7829 | 0.7830 |
| 0.0571 | 241.03 | 9400 | 1.1341 | 0.7961 | 0.7961 |
| 0.0581 | 246.15 | 9600 | 1.1606 | 0.7845 | 0.7847 |
| 0.0566 | 251.28 | 9800 | 1.1728 | 0.7795 | 0.7798 |
| 0.0571 | 256.41 | 10000 | 1.1719 | 0.7811 | 0.7814 |
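Note the pattern in the table above: training loss falls monotonically to about 0.057 while validation loss climbs from roughly 0.46 to 1.17, a classic overfitting signature under a fixed 10000-step budget. Nothing in the card indicates early stopping was used; for reference, this is how it is typically wired up in Transformers (all values here are illustrative):

```python
from transformers import EarlyStoppingCallback, TrainingArguments

args = TrainingArguments(
    output_dir="out",                  # placeholder
    evaluation_strategy="steps",
    eval_steps=200,                    # matches the 200-step cadence above
    save_strategy="steps",
    save_steps=200,
    load_best_model_at_end=True,       # required by EarlyStoppingCallback
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
stopper = EarlyStoppingCallback(early_stopping_patience=5)
# pass via Trainer(..., callbacks=[stopper])
```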
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_16384_512_56M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_16384_512_56M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:02:08+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_300\_tata-seqsight\_16384\_512\_56M-L8\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4823
* F1 Score: 0.8189
* Accuracy: 0.8189
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_tata-seqsight_16384_512_56M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1808
- F1 Score: 0.7990
- Accuracy: 0.7993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
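A side note on the fractional epoch numbers in the table that follows: with a fixed 10000-step budget and batch size 128, epochs are simply steps divided by batches per epoch. Back-solving from the first logged row gives an approximation (the exact train-set size is not recorded here):

```python
batch_size = 128
steps, epochs_logged = 200, 5.13            # first row of the table below
steps_per_epoch = steps / epochs_logged
print(round(steps_per_epoch))               # ~39 batches per epoch
print(round(steps_per_epoch) * batch_size)  # ~4992, an upper bound on the train size
```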
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4913 | 5.13 | 200 | 0.4688 | 0.7937 | 0.7945 |
| 0.3787 | 10.26 | 400 | 0.4773 | 0.7953 | 0.7961 |
| 0.3192 | 15.38 | 600 | 0.4871 | 0.7993 | 0.7993 |
| 0.2676 | 20.51 | 800 | 0.5482 | 0.7773 | 0.7781 |
| 0.2292 | 25.64 | 1000 | 0.6552 | 0.7570 | 0.7602 |
| 0.1866 | 30.77 | 1200 | 0.6541 | 0.7858 | 0.7863 |
| 0.1568 | 35.9 | 1400 | 0.8015 | 0.7785 | 0.7798 |
| 0.1383 | 41.03 | 1600 | 0.7849 | 0.7820 | 0.7830 |
| 0.1083 | 46.15 | 1800 | 1.0140 | 0.7736 | 0.7749 |
| 0.0948 | 51.28 | 2000 | 0.8802 | 0.7910 | 0.7912 |
| 0.0858 | 56.41 | 2200 | 1.0149 | 0.7938 | 0.7945 |
| 0.0675 | 61.54 | 2400 | 1.0784 | 0.7734 | 0.7749 |
| 0.0675 | 66.67 | 2600 | 1.0166 | 0.7893 | 0.7896 |
| 0.0561 | 71.79 | 2800 | 1.0261 | 0.7977 | 0.7977 |
| 0.0621 | 76.92 | 3000 | 0.9881 | 0.7929 | 0.7928 |
| 0.0503 | 82.05 | 3200 | 1.0578 | 0.7856 | 0.7863 |
| 0.0452 | 87.18 | 3400 | 1.0860 | 0.7977 | 0.7977 |
| 0.0439 | 92.31 | 3600 | 1.2379 | 0.7939 | 0.7945 |
| 0.0423 | 97.44 | 3800 | 1.0856 | 0.7842 | 0.7847 |
| 0.0378 | 102.56 | 4000 | 1.2667 | 0.7887 | 0.7896 |
| 0.0379 | 107.69 | 4200 | 1.1000 | 0.7847 | 0.7847 |
| 0.0339 | 112.82 | 4400 | 1.1685 | 0.7976 | 0.7977 |
| 0.0323 | 117.95 | 4600 | 1.2308 | 0.7789 | 0.7798 |
| 0.0284 | 123.08 | 4800 | 1.2325 | 0.7959 | 0.7961 |
| 0.0305 | 128.21 | 5000 | 1.2844 | 0.7906 | 0.7912 |
| 0.0296 | 133.33 | 5200 | 1.2424 | 0.8025 | 0.8026 |
| 0.0289 | 138.46 | 5400 | 1.2450 | 0.8043 | 0.8042 |
| 0.0257 | 143.59 | 5600 | 1.4375 | 0.7801 | 0.7814 |
| 0.0258 | 148.72 | 5800 | 1.3010 | 0.7958 | 0.7961 |
| 0.0252 | 153.85 | 6000 | 1.2721 | 0.7991 | 0.7993 |
| 0.0242 | 158.97 | 6200 | 1.4027 | 0.7872 | 0.7879 |
| 0.0248 | 164.1 | 6400 | 1.3318 | 0.7889 | 0.7896 |
| 0.0198 | 169.23 | 6600 | 1.4592 | 0.7905 | 0.7912 |
| 0.0239 | 174.36 | 6800 | 1.4135 | 0.7923 | 0.7928 |
| 0.0207 | 179.49 | 7000 | 1.3361 | 0.7976 | 0.7977 |
| 0.0181 | 184.62 | 7200 | 1.5204 | 0.7957 | 0.7961 |
| 0.0173 | 189.74 | 7400 | 1.4896 | 0.7939 | 0.7945 |
| 0.0172 | 194.87 | 7600 | 1.4162 | 0.7959 | 0.7961 |
| 0.0176 | 200.0 | 7800 | 1.4122 | 0.7907 | 0.7912 |
| 0.0179 | 205.13 | 8000 | 1.3226 | 0.8005 | 0.8010 |
| 0.0167 | 210.26 | 8200 | 1.4873 | 0.7890 | 0.7896 |
| 0.0165 | 215.38 | 8400 | 1.5166 | 0.7922 | 0.7928 |
| 0.0153 | 220.51 | 8600 | 1.5241 | 0.7890 | 0.7896 |
| 0.0147 | 225.64 | 8800 | 1.5495 | 0.7874 | 0.7879 |
| 0.0136 | 230.77 | 9000 | 1.5305 | 0.7941 | 0.7945 |
| 0.0139 | 235.9 | 9200 | 1.5601 | 0.7974 | 0.7977 |
| 0.0149 | 241.03 | 9400 | 1.5617 | 0.7890 | 0.7896 |
| 0.0138 | 246.15 | 9600 | 1.5484 | 0.7874 | 0.7879 |
| 0.0136 | 251.28 | 9800 | 1.5533 | 0.7924 | 0.7928 |
| 0.0114 | 256.41 | 10000 | 1.5440 | 0.7941 | 0.7945 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_16384_512_56M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_16384_512_56M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:02:08+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_300\_tata-seqsight\_16384\_512\_56M-L32\_f
===========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1808
* F1 Score: 0.7990
* Accuracy: 0.7993
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_notata-seqsight_16384_512_56M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1295
- F1 Score: 0.9527
- Accuracy: 0.9527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows this list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
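
Since this card is tagged `generated_from_trainer`, the hyperparameters above plausibly map onto 🤗 `TrainingArguments`. The sketch below shows that mapping; `output_dir` and the rest of the `Trainer` wiring are illustrative assumptions, not taken from the original training script.

```python
# A minimal sketch, assuming the listed hyperparameters were passed to the
# Hugging Face Trainer; output_dir is a hypothetical placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_prom_prom_300_notata-seqsight_16384_512_56M-L1_f",  # assumed
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=128,  # train_batch_size: 128
    per_device_eval_batch_size=128,   # eval_batch_size: 128
    seed=42,
    adam_beta1=0.9,                   # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # epsilon=1e-08
    lr_scheduler_type="linear",
    max_steps=10_000,                 # training_steps: 10000
)
```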
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.2629 | 0.6 | 200 | 0.1662 | 0.9346 | 0.9346 |
| 0.1698 | 1.2 | 400 | 0.1431 | 0.9432 | 0.9433 |
| 0.1572 | 1.81 | 600 | 0.1328 | 0.9461 | 0.9461 |
| 0.141 | 2.41 | 800 | 0.1267 | 0.9499 | 0.9499 |
| 0.1399 | 3.01 | 1000 | 0.1211 | 0.9561 | 0.9561 |
| 0.1357 | 3.61 | 1200 | 0.1161 | 0.9555 | 0.9555 |
| 0.134 | 4.22 | 1400 | 0.1141 | 0.9574 | 0.9574 |
| 0.1265 | 4.82 | 1600 | 0.1129 | 0.9584 | 0.9584 |
| 0.1241 | 5.42 | 1800 | 0.1109 | 0.9583 | 0.9584 |
| 0.1213 | 6.02 | 2000 | 0.1110 | 0.9565 | 0.9565 |
| 0.1205 | 6.63 | 2200 | 0.1074 | 0.9576 | 0.9576 |
| 0.1167 | 7.23 | 2400 | 0.1070 | 0.9600 | 0.9601 |
| 0.1162 | 7.83 | 2600 | 0.1078 | 0.9597 | 0.9597 |
| 0.1142 | 8.43 | 2800 | 0.1070 | 0.9597 | 0.9597 |
| 0.1177 | 9.04 | 3000 | 0.1044 | 0.9580 | 0.9580 |
| 0.1128 | 9.64 | 3200 | 0.1163 | 0.9572 | 0.9572 |
| 0.1086 | 10.24 | 3400 | 0.1052 | 0.9597 | 0.9597 |
| 0.1132 | 10.84 | 3600 | 0.1075 | 0.9604 | 0.9604 |
| 0.1087 | 11.45 | 3800 | 0.1072 | 0.9593 | 0.9593 |
| 0.109 | 12.05 | 4000 | 0.1058 | 0.9616 | 0.9616 |
| 0.1074 | 12.65 | 4200 | 0.1038 | 0.9604 | 0.9604 |
| 0.1103 | 13.25 | 4400 | 0.1069 | 0.9595 | 0.9595 |
| 0.1073 | 13.86 | 4600 | 0.1047 | 0.9610 | 0.9610 |
| 0.1055 | 14.46 | 4800 | 0.1057 | 0.9604 | 0.9604 |
| 0.1045 | 15.06 | 5000 | 0.1039 | 0.9593 | 0.9593 |
| 0.1039 | 15.66 | 5200 | 0.1036 | 0.9601 | 0.9601 |
| 0.1054 | 16.27 | 5400 | 0.1042 | 0.9595 | 0.9595 |
| 0.1014 | 16.87 | 5600 | 0.1045 | 0.9604 | 0.9604 |
| 0.1016 | 17.47 | 5800 | 0.1051 | 0.9601 | 0.9601 |
| 0.1062 | 18.07 | 6000 | 0.1035 | 0.9593 | 0.9593 |
| 0.1006 | 18.67 | 6200 | 0.1052 | 0.9587 | 0.9587 |
| 0.1044 | 19.28 | 6400 | 0.1037 | 0.9595 | 0.9595 |
| 0.1014 | 19.88 | 6600 | 0.1042 | 0.9593 | 0.9593 |
| 0.1059 | 20.48 | 6800 | 0.1047 | 0.9597 | 0.9597 |
| 0.0979 | 21.08 | 7000 | 0.1046 | 0.9599 | 0.9599 |
| 0.1023 | 21.69 | 7200 | 0.1070 | 0.9606 | 0.9606 |
| 0.0993 | 22.29 | 7400 | 0.1070 | 0.9597 | 0.9597 |
| 0.1017 | 22.89 | 7600 | 0.1033 | 0.9595 | 0.9595 |
| 0.0964 | 23.49 | 7800 | 0.1051 | 0.9587 | 0.9587 |
| 0.0987 | 24.1 | 8000 | 0.1047 | 0.9599 | 0.9599 |
| 0.0997 | 24.7 | 8200 | 0.1039 | 0.9599 | 0.9599 |
| 0.0977 | 25.3 | 8400 | 0.1044 | 0.9604 | 0.9604 |
| 0.0986 | 25.9 | 8600 | 0.1048 | 0.9602 | 0.9602 |
| 0.099 | 26.51 | 8800 | 0.1045 | 0.9597 | 0.9597 |
| 0.099 | 27.11 | 9000 | 0.1037 | 0.9599 | 0.9599 |
| 0.0989 | 27.71 | 9200 | 0.1038 | 0.9602 | 0.9602 |
| 0.0968 | 28.31 | 9400 | 0.1037 | 0.9602 | 0.9602 |
| 0.101 | 28.92 | 9600 | 0.1041 | 0.9597 | 0.9597 |
| 0.0941 | 29.52 | 9800 | 0.1040 | 0.9599 | 0.9599 |
| 0.1012 | 30.12 | 10000 | 0.1039 | 0.9595 | 0.9595 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_16384_512_56M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_16384_512_56M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:02:09+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_300\_notata-seqsight\_16384\_512\_56M-L1\_f
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1295
* F1 Score: 0.9527
* Accuracy: 0.9527
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_notata-seqsight_16384_512_56M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1227
- F1 Score: 0.9574
- Accuracy: 0.9574
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.234 | 0.6 | 200 | 0.1419 | 0.9427 | 0.9427 |
| 0.1453 | 1.2 | 400 | 0.1183 | 0.9542 | 0.9542 |
| 0.1345 | 1.81 | 600 | 0.1111 | 0.9580 | 0.9580 |
| 0.1208 | 2.41 | 800 | 0.1112 | 0.9595 | 0.9595 |
| 0.1216 | 3.01 | 1000 | 0.1085 | 0.9600 | 0.9601 |
| 0.1168 | 3.61 | 1200 | 0.1050 | 0.9602 | 0.9602 |
| 0.1166 | 4.22 | 1400 | 0.1048 | 0.9589 | 0.9589 |
| 0.1102 | 4.82 | 1600 | 0.1029 | 0.9614 | 0.9614 |
| 0.108 | 5.42 | 1800 | 0.1053 | 0.9572 | 0.9572 |
| 0.1058 | 6.02 | 2000 | 0.1018 | 0.9602 | 0.9602 |
| 0.1043 | 6.63 | 2200 | 0.1016 | 0.9617 | 0.9617 |
| 0.1015 | 7.23 | 2400 | 0.1013 | 0.9627 | 0.9627 |
| 0.1001 | 7.83 | 2600 | 0.1015 | 0.9608 | 0.9608 |
| 0.0978 | 8.43 | 2800 | 0.1045 | 0.9616 | 0.9616 |
| 0.1026 | 9.04 | 3000 | 0.0999 | 0.9604 | 0.9604 |
| 0.0969 | 9.64 | 3200 | 0.1051 | 0.9617 | 0.9617 |
| 0.0925 | 10.24 | 3400 | 0.1000 | 0.9621 | 0.9621 |
| 0.0967 | 10.84 | 3600 | 0.1009 | 0.9606 | 0.9606 |
| 0.092 | 11.45 | 3800 | 0.0995 | 0.9623 | 0.9623 |
| 0.0923 | 12.05 | 4000 | 0.1028 | 0.9616 | 0.9616 |
| 0.0925 | 12.65 | 4200 | 0.0979 | 0.9614 | 0.9614 |
| 0.0933 | 13.25 | 4400 | 0.1031 | 0.9616 | 0.9616 |
| 0.0906 | 13.86 | 4600 | 0.0986 | 0.9619 | 0.9619 |
| 0.0888 | 14.46 | 4800 | 0.1023 | 0.9616 | 0.9616 |
| 0.0882 | 15.06 | 5000 | 0.0985 | 0.9625 | 0.9625 |
| 0.0872 | 15.66 | 5200 | 0.0987 | 0.9617 | 0.9617 |
| 0.0868 | 16.27 | 5400 | 0.1003 | 0.9612 | 0.9612 |
| 0.0822 | 16.87 | 5600 | 0.1010 | 0.9621 | 0.9621 |
| 0.0821 | 17.47 | 5800 | 0.1008 | 0.9610 | 0.9610 |
| 0.0864 | 18.07 | 6000 | 0.0973 | 0.9621 | 0.9621 |
| 0.0813 | 18.67 | 6200 | 0.1021 | 0.9608 | 0.9608 |
| 0.0842 | 19.28 | 6400 | 0.1007 | 0.9617 | 0.9617 |
| 0.081 | 19.88 | 6600 | 0.0989 | 0.9625 | 0.9625 |
| 0.0847 | 20.48 | 6800 | 0.1023 | 0.9623 | 0.9623 |
| 0.0775 | 21.08 | 7000 | 0.1023 | 0.9614 | 0.9614 |
| 0.0834 | 21.69 | 7200 | 0.1020 | 0.9604 | 0.9604 |
| 0.0773 | 22.29 | 7400 | 0.1036 | 0.9625 | 0.9625 |
| 0.0805 | 22.89 | 7600 | 0.0975 | 0.9631 | 0.9631 |
| 0.0762 | 23.49 | 7800 | 0.1031 | 0.9621 | 0.9621 |
| 0.0789 | 24.1 | 8000 | 0.0980 | 0.9612 | 0.9612 |
| 0.0782 | 24.7 | 8200 | 0.1002 | 0.9634 | 0.9634 |
| 0.0763 | 25.3 | 8400 | 0.0993 | 0.9623 | 0.9623 |
| 0.0754 | 25.9 | 8600 | 0.1001 | 0.9623 | 0.9623 |
| 0.0771 | 26.51 | 8800 | 0.0999 | 0.9612 | 0.9612 |
| 0.0759 | 27.11 | 9000 | 0.0988 | 0.9614 | 0.9614 |
| 0.0762 | 27.71 | 9200 | 0.0994 | 0.9631 | 0.9631 |
| 0.0754 | 28.31 | 9400 | 0.0985 | 0.9629 | 0.9629 |
| 0.0771 | 28.92 | 9600 | 0.1004 | 0.9604 | 0.9604 |
| 0.0714 | 29.52 | 9800 | 0.0998 | 0.9608 | 0.9608 |
| 0.0773 | 30.12 | 10000 | 0.0995 | 0.9612 | 0.9612 |
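
The F1 Score and Accuracy columns above could come from an evaluation hook like the sketch below. The exact averaging used by the original script is unknown, so macro F1 is an assumption (consistent with the near-identical F1/accuracy values on this binary task).

```python
# A hedged sketch of a Trainer compute_metrics hook that could produce the
# F1 Score and Accuracy columns reported above; macro averaging is assumed.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)  # pick the highest-scoring class
    return {
        "f1": f1_score(labels, predictions, average="macro"),
        "accuracy": accuracy_score(labels, predictions),
    }
```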
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_16384_512_56M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_16384_512_56M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:03:08+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_300\_notata-seqsight\_16384\_512\_56M-L8\_f
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1227
* F1 Score: 0.9574
* Accuracy: 0.9574
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_notata-seqsight_16384_512_56M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1397
- F1 Score: 0.9580
- Accuracy: 0.9580
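
Since this repo stores a PEFT adapter rather than full model weights, using it means attaching the adapter to the named base model. Below is a minimal loading sketch; the sequence-classification head, `num_labels=2`, and the `trust_remote_code` flags are assumptions, not confirmed by this card.

```python
# A minimal loading sketch, assuming a binary sequence-classification task;
# the base model is custom, so trust_remote_code may be required.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_16384_512_56M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # num_labels=2 is assumed
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter
```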
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.2149 | 0.6 | 200 | 0.1305 | 0.9489 | 0.9489 |
| 0.1337 | 1.2 | 400 | 0.1162 | 0.9563 | 0.9563 |
| 0.1259 | 1.81 | 600 | 0.1077 | 0.9595 | 0.9595 |
| 0.1126 | 2.41 | 800 | 0.1096 | 0.9599 | 0.9599 |
| 0.1142 | 3.01 | 1000 | 0.1057 | 0.9623 | 0.9623 |
| 0.109 | 3.61 | 1200 | 0.1069 | 0.9614 | 0.9614 |
| 0.1062 | 4.22 | 1400 | 0.1027 | 0.9621 | 0.9621 |
| 0.101 | 4.82 | 1600 | 0.0996 | 0.9631 | 0.9631 |
| 0.0978 | 5.42 | 1800 | 0.1000 | 0.9621 | 0.9621 |
| 0.0947 | 6.02 | 2000 | 0.1025 | 0.9625 | 0.9625 |
| 0.0931 | 6.63 | 2200 | 0.0993 | 0.9614 | 0.9614 |
| 0.0891 | 7.23 | 2400 | 0.0992 | 0.9629 | 0.9629 |
| 0.0866 | 7.83 | 2600 | 0.0990 | 0.9610 | 0.9610 |
| 0.083 | 8.43 | 2800 | 0.1035 | 0.9619 | 0.9619 |
| 0.0869 | 9.04 | 3000 | 0.0996 | 0.9612 | 0.9612 |
| 0.0809 | 9.64 | 3200 | 0.1042 | 0.9634 | 0.9634 |
| 0.075 | 10.24 | 3400 | 0.1011 | 0.9636 | 0.9636 |
| 0.0784 | 10.84 | 3600 | 0.1002 | 0.9633 | 0.9633 |
| 0.0718 | 11.45 | 3800 | 0.1034 | 0.9625 | 0.9625 |
| 0.0717 | 12.05 | 4000 | 0.1085 | 0.9617 | 0.9617 |
| 0.0702 | 12.65 | 4200 | 0.1058 | 0.9623 | 0.9623 |
| 0.0708 | 13.25 | 4400 | 0.1058 | 0.9619 | 0.9619 |
| 0.0688 | 13.86 | 4600 | 0.1014 | 0.9627 | 0.9627 |
| 0.0664 | 14.46 | 4800 | 0.1066 | 0.9636 | 0.9636 |
| 0.0635 | 15.06 | 5000 | 0.0993 | 0.9640 | 0.9640 |
| 0.0623 | 15.66 | 5200 | 0.1077 | 0.9629 | 0.9629 |
| 0.06 | 16.27 | 5400 | 0.1043 | 0.9634 | 0.9634 |
| 0.0579 | 16.87 | 5600 | 0.1035 | 0.9642 | 0.9642 |
| 0.0547 | 17.47 | 5800 | 0.1150 | 0.9593 | 0.9593 |
| 0.0576 | 18.07 | 6000 | 0.1133 | 0.9610 | 0.9610 |
| 0.0531 | 18.67 | 6200 | 0.1164 | 0.9593 | 0.9593 |
| 0.056 | 19.28 | 6400 | 0.1079 | 0.9636 | 0.9636 |
| 0.0525 | 19.88 | 6600 | 0.1099 | 0.9646 | 0.9646 |
| 0.0554 | 20.48 | 6800 | 0.1094 | 0.9644 | 0.9644 |
| 0.0493 | 21.08 | 7000 | 0.1172 | 0.9629 | 0.9629 |
| 0.052 | 21.69 | 7200 | 0.1154 | 0.9617 | 0.9617 |
| 0.0458 | 22.29 | 7400 | 0.1186 | 0.9633 | 0.9633 |
| 0.048 | 22.89 | 7600 | 0.1110 | 0.9629 | 0.9629 |
| 0.0423 | 23.49 | 7800 | 0.1289 | 0.9617 | 0.9617 |
| 0.0472 | 24.1 | 8000 | 0.1136 | 0.9623 | 0.9623 |
| 0.0432 | 24.7 | 8200 | 0.1182 | 0.9629 | 0.9629 |
| 0.043 | 25.3 | 8400 | 0.1209 | 0.9631 | 0.9631 |
| 0.0415 | 25.9 | 8600 | 0.1193 | 0.9633 | 0.9633 |
| 0.0442 | 26.51 | 8800 | 0.1188 | 0.9623 | 0.9623 |
| 0.0408 | 27.11 | 9000 | 0.1158 | 0.9640 | 0.9640 |
| 0.0382 | 27.71 | 9200 | 0.1232 | 0.9633 | 0.9633 |
| 0.0417 | 28.31 | 9400 | 0.1174 | 0.9631 | 0.9631 |
| 0.0398 | 28.92 | 9600 | 0.1214 | 0.9616 | 0.9616 |
| 0.0376 | 29.52 | 9800 | 0.1213 | 0.9617 | 0.9617 |
| 0.0418 | 30.12 | 10000 | 0.1211 | 0.9619 | 0.9619 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_16384_512_56M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_16384_512_56M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:06:53+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_300\_notata-seqsight\_16384\_512\_56M-L32\_f
=============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1397
* F1 Score: 0.9580
* Accuracy: 0.9580
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
## Lumimaid 0.1
<center><div style="width: 100%;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/d3QMaxy3peFTpSlWdWF-k.png" style="display: block; margin: auto;">
</div></center>
This model uses the Llama3 **prompting format**
Llama3 trained on our RP datasets; we tried to strike a balance between the ERP and the RP: not too horny, but just enough.
We also added some non-RP data, making the model less dumb overall. It should work out to roughly a 40%/60% ratio of non-RP to RP+ERP data.
This model includes the new Luminae dataset from Ikari.
If you consider trying this model, please give us some feedback either on the Community tab on hf or on our [Discord Server](https://discord.gg/MtCVRWTZXY).
## Credits:
- Undi
- IkariDev
## Description
This repo contains FP16 files of Lumimaid-8B-v0.1.
Switch: [8B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) - [70B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1) - [70B-alt](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt)
## Training data used:
- [Aesir datasets](https://huggingface.co/MinervaAI)
- [NoRobots](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt)
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP) - 8k ctx
- [toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt)
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
- Luminae-i1 (70B/70B-alt) (i2 did not exist when the 70B started training) | Luminae-i2 (8B) (this one gave better results on the 8B) - Ikari's Dataset
- [Squish42/bluemoon-fandom-1-1-rp-cleaned](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned) - 50% (randomly)
- [NobodyExistsOnTheInternet/PIPPAsharegptv2test](https://huggingface.co/datasets/NobodyExistsOnTheInternet/PIPPAsharegptv2test) - 5% (randomly)
- [cgato/SlimOrcaDedupCleaned](https://huggingface.co/datasets/cgato/SlimOrcaDedupCleaned) - 5% (randomly)
- Airoboros (reduced)
- [Capybara](https://huggingface.co/datasets/Undi95/Capybara-ShareGPT/) (reduced)
## Models used (only for 8B)
- Initial LumiMaid 8B Finetune
- Undi95/Llama-3-Unholy-8B-e4
- Undi95/Llama-3-LewdPlay-8B
## Prompt template: Llama3
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
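
As a quick sanity check, the tokenizer's bundled chat template should reproduce the format above. A minimal sketch (standard `transformers` usage; the example messages are arbitrary):

```python
# A minimal sketch, assuming this repo ships the Llama3 chat template shown above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NeverSleep/Llama-3-Lumimaid-8B-v0.1")
chat = [
    {"role": "system", "content": "You are a helpful roleplay assistant."},
    {"role": "user", "content": "Hello!"},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
print(prompt)  # should follow the <|begin_of_text|>... format shown above
```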
## Others
Undi: If you want to support us, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek | {"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "nsfw"]} | NeverSleep/Llama-3-Lumimaid-8B-v0.1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"nsfw",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:07:28+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #not-for-all-audiences #nsfw #conversational #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## Lumimaid 0.1
<center><div style="width: 100%;">
<img src="URL style="display: block; margin: auto;">
</div></center>
This model uses the Llama3 prompting format
Llama3 trained on our RP datasets; we tried to strike a balance between the ERP and the RP: not too horny, but just enough.
We also added some non-RP data, making the model less dumb overall. It should work out to roughly a 40%/60% ratio of non-RP to RP+ERP data.
This model includes the new Luminae dataset from Ikari.
If you consider trying this model, please give us some feedback either on the Community tab on hf or on our Discord Server.
## Credits:
- Undi
- IkariDev
## Description
This repo contains FP16 files of Lumimaid-8B-v0.1.
Switch: 8B - 70B - 70B-alt
## Training data used:
- Aesir datasets
- NoRobots
- limarp - 8k ctx
- toxic-dpo-v0.1-sharegpt
- ToxicQAFinal
- Luminae-i1 (70B/70B-alt) (i2 did not exist when the 70B started training) | Luminae-i2 (8B) (this one gave better results on the 8B) - Ikari's Dataset
- Squish42/bluemoon-fandom-1-1-rp-cleaned - 50% (randomly)
- NobodyExistsOnTheInternet/PIPPAsharegptv2test - 5% (randomly)
- cgato/SlimOrcaDedupCleaned - 5% (randomly)
- Airoboros (reduced)
- Capybara (reduced)
## Models used (only for 8B)
- Initial LumiMaid 8B Finetune
- Undi95/Llama-3-Unholy-8B-e4
- Undi95/Llama-3-LewdPlay-8B
## Prompt template: Llama3
## Others
Undi: If you want to support us, you can here.
IkariDev: Visit my retro/neocities style website please kek | [
"## Lumimaid 0.1\n\n<center><div style=\"width: 100%;\">\n <img src=\"URL style=\"display: block; margin: auto;\">\n</div></center>\n\nThis model uses the Llama3 prompting format\n\nLlama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.\n\nWe also added some non-RP dataset, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.\n\nThis model includes the new Luminae dataset from Ikari.\n\n\nIf you consider trying this model please give us some feedback either on the Community tab on hf or on our Discord Server.",
"## Credits:\n- Undi\n- IkariDev",
"## Description\n\nThis repo contains FP16 files of Lumimaid-8B-v0.1.\n\nSwitch: 8B - 70B - 70B-alt",
"## Training data used:\n- Aesir datasets\n- NoRobots\n- limarp - 8k ctx\n- toxic-dpo-v0.1-sharegpt\n- ToxicQAFinal\n- Luminae-i1 (70B/70B-alt) (i2 was not existing when the 70b started training) | Luminae-i2 (8B) (this one gave better results on the 8b) - Ikari's Dataset\n- Squish42/bluemoon-fandom-1-1-rp-cleaned - 50% (randomly)\n- NobodyExistsOnTheInternet/PIPPAsharegptv2test - 5% (randomly)\n- cgato/SlimOrcaDedupCleaned - 5% (randomly)\n- Airoboros (reduced)\n- Capybara (reduced)",
"## Models used (only for 8B)\n\n- Initial LumiMaid 8B Finetune\n- Undi95/Llama-3-Unholy-8B-e4\n- Undi95/Llama-3-LewdPlay-8B",
"## Prompt template: Llama3",
"## Others\n\nUndi: If you want to support us, you can here.\n\nIkariDev: Visit my retro/neocities style website please kek"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #nsfw #conversational #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Lumimaid 0.1\n\n<center><div style=\"width: 100%;\">\n <img src=\"URL style=\"display: block; margin: auto;\">\n</div></center>\n\nThis model uses the Llama3 prompting format\n\nLlama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.\n\nWe also added some non-RP dataset, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.\n\nThis model includes the new Luminae dataset from Ikari.\n\n\nIf you consider trying this model please give us some feedback either on the Community tab on hf or on our Discord Server.",
"## Credits:\n- Undi\n- IkariDev",
"## Description\n\nThis repo contains FP16 files of Lumimaid-8B-v0.1.\n\nSwitch: 8B - 70B - 70B-alt",
"## Training data used:\n- Aesir datasets\n- NoRobots\n- limarp - 8k ctx\n- toxic-dpo-v0.1-sharegpt\n- ToxicQAFinal\n- Luminae-i1 (70B/70B-alt) (i2 was not existing when the 70b started training) | Luminae-i2 (8B) (this one gave better results on the 8b) - Ikari's Dataset\n- Squish42/bluemoon-fandom-1-1-rp-cleaned - 50% (randomly)\n- NobodyExistsOnTheInternet/PIPPAsharegptv2test - 5% (randomly)\n- cgato/SlimOrcaDedupCleaned - 5% (randomly)\n- Airoboros (reduced)\n- Capybara (reduced)",
"## Models used (only for 8B)\n\n- Initial LumiMaid 8B Finetune\n- Undi95/Llama-3-Unholy-8B-e4\n- Undi95/Llama-3-LewdPlay-8B",
"## Prompt template: Llama3",
"## Others\n\nUndi: If you want to support us, you can here.\n\nIkariDev: Visit my retro/neocities style website please kek"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jmodel/gemma-2b-sparseft-retro-topk-8229_retro_neuron_changes__checkpoint-25000 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:07:53+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
From:
https://huggingface.co/MediaTek-Research/Breeze-7B-32k-Instruct-v1_0
---
pipeline_tag: text-generation
license: apache-2.0
language:
- zh
- en
---
# Model Card for MediaTek Research Breeze-7B-32k-Instruct-v1_0
MediaTek Research Breeze-7B (hereinafter referred to as Breeze-7B) is a language model family that builds on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), specifically intended for Traditional Chinese use.
[Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) is the base model for the Breeze-7B series.
It is suitable for use if you have substantial fine-tuning data to tune it for your specific use case.
[Breeze-7B-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) derives from the base model Breeze-7B-Base, making the resulting model amenable to being used as-is for commonly seen tasks.
[Breeze-7B-32k-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-32k-Base-v1_0) is extended from the base model with more data, base change, and the disabling of the sliding window.
Roughly speaking, its 32k-token context is equivalent to 44k Traditional Chinese characters.
[Breeze-7B-32k-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-32k-Instruct-v1_0) derives from the base model Breeze-7B-32k-Base, making the resulting model amenable to being used as-is for commonly seen tasks.
Practicality-wise:
- Breeze-7B-Base expands the original vocabulary with an additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, everything else being equal, Breeze-7B operates at twice the inference speed for Traditional Chinese compared to Mistral-7B and Llama 7B (see the token-count sketch after this list). [See [Inference Performance](#inference-performance).]
- Breeze-7B-Instruct can be used as-is for common tasks such as Q&A, RAG, multi-round chat, and summarization.
- Breeze-7B-32k-Instruct can perform tasks at a document level (for Chinese, roughly 20 to 40 pages).
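
The token-count claim above is easy to spot-check by tokenizing the same Traditional Chinese text with both models. A hedged sketch (the sample sentence is an arbitrary illustration, not a benchmark):

```python
# A rough check of the vocabulary-expansion claim: fewer tokens for the same
# Traditional Chinese text implies proportionally faster inference.
from transformers import AutoTokenizer

text = "今天天氣真好,我們一起去公園散步吧!"
for model_id in ("mistralai/Mistral-7B-v0.1", "MediaTek-Research/Breeze-7B-Base-v1_0"):
    tok = AutoTokenizer.from_pretrained(model_id)
    print(model_id, len(tok.encode(text)))
```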
*A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.*
## Features
- Breeze-7B-32k-Base-v1_0
- Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese
- 32k-token context length
- Breeze-7B-32k-Instruct-v1_0
- Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese
- 32k-token context length
- Multi-turn dialogue (without special handling for harmfulness)
## Model Details
- Breeze-7B-32k-Base-v1_0
- Pretrained from: [Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0)
- Model type: Causal decoder-only transformer language model
- Language: English and Traditional Chinese (zh-tw)
- Breeze-7B-32k-Instruct-v1_0
- Finetuned from: [Breeze-7B-32k-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-32k-Base-v1_0)
- Model type: Causal decoder-only transformer language model
- Language: English and Traditional Chinese (zh-tw)
## Long-context Performance
#### Needle-in-a-haystack Performance
We use the passkey retrieval task to test the model's ability to attend to various depths in a given sequence.
A key is placed within a long, distracting context document for the model to retrieve.
The key position is binned into 16 bins, and there are 20 testcases for each bin.
Breeze-7B-32k-Base clears the tasks with 90+% accuracy, shown in the figure below.

#### Long-DRCD Performance
| **Model/Performance(EM)** | **DRCD** | **DRCD-16k** | **DRCD-32k** |
|---------------------------|----------|--------------|--------------|
| **Breeze-7B-32k-Instruct-v1\_0** | 76.9 | 54.82 | 44.26 |
| **Breeze-7B-32k-Base-v1\_0** | 79.73 | 69.68 | 61.55 |
| **Breeze-7B-Base-v1\_0** | 80.61 | 21.79 | 15.29 |
#### Short-Benchmark Performance
| **Model/Performance** | **TMMLU+** | **MMLU** | **TABLE** | **MT-Bench-tw** | **MT-Bench** |
|---------------------------|----------|--------------|--------------|-----|-----|
| **Breeze-7B-32k-Instruct-v1\_0** | 41.37 | 61.34 | 34 | 5.8 | 7.4 |
| **Breeze-7B-Instruct-v1\_0** | 42.67 | 62.73 | 39.58 | 6.0 | 7.4 |
## Use in Transformers
First, install direct dependencies:
```
pip install transformers torch accelerate
```
<p style="color:red;">Flash-attention2 is strongly recommended for long context scenarios.</p>
```bash
pip install packaging ninja
pip install flash-attn
```
Then load the model in transformers:
```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-32k-Instruct-v1_0")
>>> model = AutoModelForCausalLM.from_pretrained(
...     "MediaTek-Research/Breeze-7B-32k-Instruct-v1_0",
...     device_map="auto",
...     torch_dtype=torch.bfloat16,
...     attn_implementation="flash_attention_2"
... )
>>> chat = [
... {"role": "user", "content": "你好,請問你可以完成什麼任務?"},
... {"role": "assistant", "content": "你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。"},
... {"role": "user", "content": "太棒了!"},
... ]
>>> tokenizer.apply_chat_template(chat, tokenize=False)
"<s>You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. [INST] 你好,請問你可以完成什麼任務? [/INST] 你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。 [INST] 太棒了! [/INST] "
# Tokenized results
# ['▁', '你好', ',', '請問', '你', '可以', '完成', '什麼', '任務', '?']
# ['▁', '你好', ',', '我', '可以', '幫助', '您', '解決', '各種', '問題', '、', '提供', '資訊', '和', '協助', '您', '完成', '許多', '不同', '的', '任務', '。', '例如', ':', '回答', '技術', '問題', '、', '提供', '建議', '、', '翻譯', '文字', '、', '尋找', '資料', '或', '協助', '您', '安排', '行程', '等', '。', '請', '告訴', '我', '如何', '能', '幫助', '您', '。']
# ['▁', '太', '棒', '了', '!']
```
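
The example above stops at prompt construction. A hedged continuation for actually generating a reply (the sampling settings are illustrative assumptions, not MediaTek's recommendations):

```python
# Continuing the REPL example above; sampling parameters are illustrative only.
inputs = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```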
## Citation
```
@article{MediaTek-Research2024breeze7b,
title={Breeze-7B Technical Report},
author={Chan-Jan Hsu and Chang-Le Liu and Feng-Ting Liao and Po-Chun Hsu and Yi-Chang Chen and Da-Shan Shiu},
year={2024},
eprint={2403.02712},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {} | yabichiu/Breeze-7B-32k-Instruct-v1_0-GGUF | null | [
"gguf",
"arxiv:2403.02712",
"region:us"
] | null | 2024-04-30T00:08:19+00:00 | [
"2403.02712"
] | [] | TAGS
#gguf #arxiv-2403.02712 #region-us
| From:
URL
---
pipeline\_tag: text-generation
license: apache-2.0
language:
* zh
* en
---
Model Card for MediaTek Research Breeze-7B-32k-Instruct-v1\_0
=============================================================
MediaTek Research Breeze-7B (hereinafter referred to as Breeze-7B) is a language model family that builds on top of Mistral-7B, specifically intended for Traditional Chinese use.
Breeze-7B-Base is the base model for the Breeze-7B series.
It is suitable for use if you have substantial fine-tuning data to tune it for your specific use case.
Breeze-7B-Instruct derives from the base model Breeze-7B-Base, making the resulting model amenable to being used as-is for commonly seen tasks.
Breeze-7B-32k-Base is extended from the base model with more data, base change, and the disabling of the sliding window.
Roughly speaking, its 32k-token context is equivalent to 44k Traditional Chinese characters.
Breeze-7B-32k-Instruct derives from the base model Breeze-7B-32k-Base, making the resulting model amenable to being used as-is for commonly seen tasks.
Practicality-wise:
* Breeze-7B-Base expands the original vocabulary with an additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, everything else being equal, Breeze-7B operates at twice the inference speed for Traditional Chinese compared to Mistral-7B and Llama 7B. See [Inference Performance].
* Breeze-7B-Instruct can be used as-is for common tasks such as Q&A, RAG, multi-round chat, and summarization.
* Breeze-7B-32k-Instruct can perform tasks at a document level (for Chinese, roughly 20 to 40 pages).
*A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.*
Features
--------
* Breeze-7B-32k-Base-v1\_0
+ Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese
+ 32k-token context length
* Breeze-7B-32k-Instruct-v1\_0
+ Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese
+ 32k-token context length
+ Multi-turn dialogue (without special handling for harmfulness)
Model Details
-------------
* Breeze-7B-32k-Base-v1\_0
+ Pretrained from: Breeze-7B-Base
+ Model type: Causal decoder-only transformer language model
+ Language: English and Traditional Chinese (zh-tw)
* Breeze-7B-32k-Instruct-v1\_0
+ Finetuned from: Breeze-7B-32k-Base
+ Model type: Causal decoder-only transformer language model
+ Language: English and Traditional Chinese (zh-tw)
Long-context Performance
------------------------
#### Needle-in-a-haystack Performance
We use the passkey retrieval task to test the model's ability to attend to various depths in a given sequence.
A key is placed within a long, distracting context document for the model to retrieve.
The key position is binned into 16 bins, and there are 20 testcases for each bin.
Breeze-7B-32k-Base clears the tasks with 90+% accuracy, shown in the figure below.
!Needle-in-a-haystack Performance
#### Long-DRCD Performance
#### Short-Benchmark Performance
Use in Transformers
-------------------
First, install direct dependencies:
Flash-attention2 is strongly recommended for long context scenarios.
Then load the model in transformers:
| [
"#### Needle-in-a-haystack Performance\n\n\nWe use the passkey retrieval task to test the model's ability to attend to different various depths in a given sequence.\nA key in placed within a long context distracting document for the model to retrieve.\nThe key position is binned into 16 bins, and there are 20 testcases for each bin.\nBreeze-7B-32k-Base clears the tasks with 90+% accuracy, shown in the figure below.\n!Needle-in-a-haystack Performance",
"#### Long-DRCD Performance",
"#### Short-Benchmark Performance\n\n\n\nUse in Transformers\n-------------------\n\n\nFirst, install direct dependencies:\n\n\nFlash-attention2 is strongly recommended for long context scenarios.\n\n\nThen load the model in transformers:"
] | [
"TAGS\n#gguf #arxiv-2403.02712 #region-us \n",
"#### Needle-in-a-haystack Performance\n\n\nWe use the passkey retrieval task to test the model's ability to attend to different various depths in a given sequence.\nA key in placed within a long context distracting document for the model to retrieve.\nThe key position is binned into 16 bins, and there are 20 testcases for each bin.\nBreeze-7B-32k-Base clears the tasks with 90+% accuracy, shown in the figure below.\n!Needle-in-a-haystack Performance",
"#### Long-DRCD Performance",
"#### Short-Benchmark Performance\n\n\n\nUse in Transformers\n-------------------\n\n\nFirst, install direct dependencies:\n\n\nFlash-attention2 is strongly recommended for long context scenarios.\n\n\nThen load the model in transformers:"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | yatharth97/llama-2-7b-chat-yatharth-v4-test | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:08:27+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_all-seqsight_16384_512_56M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4156
- F1 Score: 0.8064
- Accuracy: 0.8064
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
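For readers reconstructing this run, the list above maps roughly onto the `transformers` `TrainingArguments` sketched below; this is an illustration of the configuration, not the exact training script.

```python
# Hedged sketch of how the hyperparameters above map onto TrainingArguments;
# the actual fine-tuning script (including the PEFT adapter setup) may differ.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_prom_prom_core_all-seqsight_16384_512_56M-L1_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```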
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5361 | 0.54 | 200 | 0.5003 | 0.7605 | 0.7622 |
| 0.4824 | 1.08 | 400 | 0.4883 | 0.7687 | 0.7703 |
| 0.4579 | 1.62 | 600 | 0.4578 | 0.7864 | 0.7865 |
| 0.4541 | 2.16 | 800 | 0.4524 | 0.7891 | 0.7892 |
| 0.4471 | 2.7 | 1000 | 0.4554 | 0.7867 | 0.7873 |
| 0.4424 | 3.24 | 1200 | 0.4485 | 0.7899 | 0.7905 |
| 0.4363 | 3.78 | 1400 | 0.4495 | 0.7911 | 0.7919 |
| 0.4322 | 4.32 | 1600 | 0.4463 | 0.7914 | 0.7919 |
| 0.4318 | 4.86 | 1800 | 0.4463 | 0.7925 | 0.7931 |
| 0.4286 | 5.41 | 2000 | 0.4460 | 0.7938 | 0.7943 |
| 0.4279 | 5.95 | 2200 | 0.4408 | 0.7975 | 0.7976 |
| 0.432 | 6.49 | 2400 | 0.4395 | 0.7955 | 0.7959 |
| 0.4253 | 7.03 | 2600 | 0.4357 | 0.7977 | 0.7978 |
| 0.4233 | 7.57 | 2800 | 0.4454 | 0.7959 | 0.7963 |
| 0.4187 | 8.11 | 3000 | 0.4448 | 0.7942 | 0.7949 |
| 0.4185 | 8.65 | 3200 | 0.4344 | 0.8005 | 0.8007 |
| 0.4221 | 9.19 | 3400 | 0.4366 | 0.8009 | 0.8010 |
| 0.4192 | 9.73 | 3600 | 0.4440 | 0.7962 | 0.7970 |
| 0.4211 | 10.27 | 3800 | 0.4432 | 0.7974 | 0.7981 |
| 0.417 | 10.81 | 4000 | 0.4325 | 0.7993 | 0.7993 |
| 0.418 | 11.35 | 4200 | 0.4410 | 0.7989 | 0.7993 |
| 0.4182 | 11.89 | 4400 | 0.4382 | 0.7967 | 0.7975 |
| 0.4173 | 12.43 | 4600 | 0.4302 | 0.8029 | 0.8029 |
| 0.414 | 12.97 | 4800 | 0.4365 | 0.7996 | 0.8 |
| 0.4125 | 13.51 | 5000 | 0.4473 | 0.7934 | 0.7946 |
| 0.414 | 14.05 | 5200 | 0.4353 | 0.8021 | 0.8025 |
| 0.4113 | 14.59 | 5400 | 0.4327 | 0.8021 | 0.8022 |
| 0.4119 | 15.14 | 5600 | 0.4294 | 0.8063 | 0.8063 |
| 0.412 | 15.68 | 5800 | 0.4459 | 0.7953 | 0.7965 |
| 0.4067 | 16.22 | 6000 | 0.4466 | 0.7970 | 0.7980 |
| 0.4126 | 16.76 | 6200 | 0.4358 | 0.8013 | 0.8019 |
| 0.4136 | 17.3 | 6400 | 0.4366 | 0.8009 | 0.8015 |
| 0.4103 | 17.84 | 6600 | 0.4335 | 0.8032 | 0.8035 |
| 0.4094 | 18.38 | 6800 | 0.4473 | 0.7937 | 0.7949 |
| 0.4071 | 18.92 | 7000 | 0.4349 | 0.8013 | 0.8019 |
| 0.4021 | 19.46 | 7200 | 0.4375 | 0.8012 | 0.8017 |
| 0.4129 | 20.0 | 7400 | 0.4320 | 0.8044 | 0.8047 |
| 0.4076 | 20.54 | 7600 | 0.4361 | 0.8031 | 0.8035 |
| 0.4056 | 21.08 | 7800 | 0.4344 | 0.8026 | 0.8030 |
| 0.4061 | 21.62 | 8000 | 0.4287 | 0.8030 | 0.8032 |
| 0.4071 | 22.16 | 8200 | 0.4319 | 0.8035 | 0.8039 |
| 0.409 | 22.7 | 8400 | 0.4350 | 0.8012 | 0.8019 |
| 0.4023 | 23.24 | 8600 | 0.4286 | 0.8036 | 0.8039 |
| 0.405 | 23.78 | 8800 | 0.4320 | 0.8029 | 0.8034 |
| 0.4038 | 24.32 | 9000 | 0.4309 | 0.8044 | 0.8047 |
| 0.4032 | 24.86 | 9200 | 0.4313 | 0.8030 | 0.8034 |
| 0.4118 | 25.41 | 9400 | 0.4308 | 0.8033 | 0.8037 |
| 0.4014 | 25.95 | 9600 | 0.4311 | 0.8040 | 0.8044 |
| 0.4074 | 26.49 | 9800 | 0.4322 | 0.8034 | 0.8039 |
| 0.4013 | 27.03 | 10000 | 0.4322 | 0.8036 | 0.8041 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_16384_512_56M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_16384_512_56M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:09:21+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_core\_all-seqsight\_16384\_512\_56M-L1\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4156
* F1 Score: 0.8064
* Accuracy: 0.8064
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
audio-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-attempt2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2685
- Accuracy: 0.9373
- F1: 0.9372
## Model description
More information needed
## Intended uses & limitations
More information needed
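One straightforward use is audio classification through the `transformers` pipeline; the sketch below is assumption-based (the label set is read from the checkpoint's config and is not documented in this card).

```python
# Hedged usage sketch; the repository id comes from this card's metadata,
# and "sample.wav" is a placeholder path to any audio file.
from transformers import pipeline

classifier = pipeline("audio-classification", model="mageec/wav2vec2-attempt2")
print(classifier("sample.wav", top_k=3))
```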
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 9
- eval_batch_size: 9
- seed: 42
- gradient_accumulation_steps: 12
- total_train_batch_size: 108
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0285 | 1.0 | 646 | 0.9877 | 0.7123 | 0.7029 |
| 0.6367 | 2.0 | 1293 | 0.5708 | 0.8314 | 0.8333 |
| 0.3915 | 3.0 | 1940 | 0.4389 | 0.8745 | 0.8730 |
| 0.2359 | 4.0 | 2587 | 0.3361 | 0.9077 | 0.9082 |
| 0.0987 | 5.0 | 3234 | 0.2901 | 0.9246 | 0.9248 |
| 0.0705 | 5.99 | 3876 | 0.2685 | 0.9373 | 0.9372 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "facebook/wav2vec2-base", "model-index": [{"name": "wav2vec2-attempt2", "results": []}]} | mageec/wav2vec2-attempt2 | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T00:13:57+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #base_model-facebook/wav2vec2-base #license-apache-2.0 #endpoints_compatible #region-us
| wav2vec2-attempt2
=================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2685
* Accuracy: 0.9373
* F1: 0.9372
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 9
* eval\_batch\_size: 9
* seed: 42
* gradient\_accumulation\_steps: 12
* total\_train\_batch\_size: 108
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 6
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.16.0
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 9\n* eval\\_batch\\_size: 9\n* seed: 42\n* gradient\\_accumulation\\_steps: 12\n* total\\_train\\_batch\\_size: 108\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 6\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.0\n* Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #base_model-facebook/wav2vec2-base #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 9\n* eval\\_batch\\_size: 9\n* seed: 42\n* gradient\\_accumulation\\_steps: 12\n* total\\_train\\_batch\\_size: 108\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 6\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.0\n* Tokenizers 0.15.0"
] |
text-generation | transformers | # tmp
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087 as a base.
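Schematically, task arithmetic treats each fine-tuned model as a "task vector" (its parameter delta from the base) and adds a weighted sum of those deltas back onto the base. The sketch below illustrates the method itself, not mergekit's exact implementation, which additionally applies the per-layer-range weights shown in the configuration further down; note that several layer ranges there use negative weights, which subtract a model's task vector for those layers rather than add it.

```python
# Schematic of task-arithmetic merging (Ilharco et al., 2022), not mergekit's
# exact implementation: merged = base + sum_i w_i * (model_i - base).
import torch

def task_arithmetic(
    base: dict[str, torch.Tensor],
    models: list[dict[str, torch.Tensor]],
    weights: list[float],
) -> dict[str, torch.Tensor]:
    merged = {}
    for name, base_param in base.items():
        delta = sum(w * (m[name] - base_param) for m, w in zip(models, weights))
        merged[name] = base_param + delta
    return merged
```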
### Models Merged
The following models were included in the merge:
* ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
* ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
dtype: bfloat16
merge_method: task_arithmetic
parameters:
int8_mask: 1.0
normalize: 0.0
slices:
- sources:
- layer_range: [0, 4]
model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
parameters:
weight: 0.4520595057576112
- layer_range: [0, 4]
model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
parameters:
weight: 0.1600776520249821
- layer_range: [0, 4]
model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
parameters:
weight: 0.5490392773476699
- sources:
- layer_range: [4, 8]
model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
parameters:
weight: 0.4227443099700199
- layer_range: [4, 8]
model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
parameters:
weight: -0.30631262406307586
- layer_range: [4, 8]
model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
parameters:
weight: 0.6904255251091812
- sources:
- layer_range: [8, 12]
model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
parameters:
weight: 0.27934507955064164
- layer_range: [8, 12]
model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
parameters:
weight: 0.13357572581279714
- layer_range: [8, 12]
model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
parameters:
weight: 1.0878530319347262
- sources:
- layer_range: [12, 16]
model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
parameters:
weight: 0.2797021800421193
- layer_range: [12, 16]
model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
parameters:
weight: -0.20082135736432433
- layer_range: [12, 16]
model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
parameters:
weight: 0.8701476132113257
- sources:
- layer_range: [16, 20]
model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
parameters:
weight: 0.3344752410343695
- layer_range: [16, 20]
model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
parameters:
weight: 0.4042316772497608
- layer_range: [16, 20]
model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
parameters:
weight: 0.6927692531006349
- sources:
- layer_range: [20, 24]
model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
parameters:
weight: -0.21516362235239625
- layer_range: [20, 24]
model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
parameters:
weight: 0.24938519228176126
- layer_range: [20, 24]
model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
parameters:
weight: -0.04429340576598181
- sources:
- layer_range: [24, 28]
model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
parameters:
weight: 0.5010710670606616
- layer_range: [24, 28]
model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
parameters:
weight: 0.7228729104891786
- layer_range: [24, 28]
model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
parameters:
weight: 0.20859571492467427
- sources:
- layer_range: [28, 32]
model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
parameters:
weight: 0.38158489002927837
- layer_range: [28, 32]
model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
parameters:
weight: 0.3831129222059622
- layer_range: [28, 32]
model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
parameters:
weight: 0.3132094789750319
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": []} | yuiseki/YuisekinAIEvol-Mistral-7B-ja-math-v0.1.1-1k | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:15:31+00:00 | [
"2212.04089"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2212.04089 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # tmp
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the task arithmetic merge method using ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087 as a base.
### Models Merged
The following models were included in the merge:
* ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
* ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
### Configuration
The following YAML configuration was used to produce this model:
| [
"# tmp\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the task arithmetic merge method using ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689\n* ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2212.04089 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# tmp\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the task arithmetic merge method using ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689\n* ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers |
# Hermes 2 Pro - Llama-3 8B

## Model Description
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.
This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.
Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.
This version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - `<tools>`, `<tool_call>`, `<tool_response>` and their closing tags are single tokens now.
This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI
Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling
## Example Outputs
### Ask for a structured JSON output:

### Write the plot for a story where anime became real life:

### Coding Assistance

# Prompt Format
Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)  # apply_chat_template returns a tensor of input ids
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
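For example (a hedged extension of the snippet above):

```python
# add_generation_prompt=True appends "<|im_start|>assistant\n" so the model
# answers as the assistant rather than continuing the user turn.
gen_input = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][gen_input.shape[-1]:], skip_special_tokens=True))
```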
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
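A minimal parsing sketch (hedged; the reference parsers live in the Hermes-Function-Calling repository linked above):

```python
# Hedged sketch: extract {"arguments": ..., "name": ...} payloads from
# <tool_call> tags in the generated text.
import json
import re

TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def parse_tool_calls(text: str) -> list[dict]:
    return [json.loads(payload) for payload in TOOL_CALL_RE.findall(text)]
```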
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, it should follow the format of that json to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
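As a hedged illustration (assuming pydantic v2; `jsonmode.py` in the repository above is the reference implementation, and `StockSummary` here is a hypothetical schema):

```python
# Hedged illustration, assuming pydantic v2; StockSummary is hypothetical.
import json
from pydantic import BaseModel

class StockSummary(BaseModel):
    symbol: str
    price: float

schema = json.dumps(StockSummary.model_json_schema(), indent=2)
system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    f"Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema>"
)
```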
# Benchmarks

## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5520|± |0.0145|
| | |acc_norm|0.5887|± |0.0144|
|arc_easy | 0|acc |0.8350|± |0.0076|
| | |acc_norm|0.8123|± |0.0080|
|boolq | 1|acc |0.8584|± |0.0061|
|hellaswag | 0|acc |0.6265|± |0.0048|
| | |acc_norm|0.8053|± |0.0040|
|openbookqa | 0|acc |0.3800|± |0.0217|
| | |acc_norm|0.4580|± |0.0223|
|piqa | 0|acc |0.8003|± |0.0093|
| | |acc_norm|0.8118|± |0.0091|
|winogrande | 0|acc |0.7490|± |0.0122|
```
Average: 72.62
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2520|± |0.0273|
| | |acc_norm|0.2559|± |0.0274|
|agieval_logiqa_en | 0|acc |0.3548|± |0.0188|
| | |acc_norm|0.3625|± |0.0189|
|agieval_lsat_ar | 0|acc |0.1826|± |0.0255|
| | |acc_norm|0.1913|± |0.0260|
|agieval_lsat_lr | 0|acc |0.5510|± |0.0220|
| | |acc_norm|0.5255|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6431|± |0.0293|
| | |acc_norm|0.6097|± |0.0298|
|agieval_sat_en | 0|acc |0.7330|± |0.0309|
| | |acc_norm|0.7039|± |0.0319|
|agieval_sat_en_without_passage| 0|acc |0.4029|± |0.0343|
| | |acc_norm|0.3689|± |0.0337|
|agieval_sat_math | 0|acc |0.3909|± |0.0330|
| | |acc_norm|0.3773|± |0.0328|
```
Average: 42.44
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5737|± |0.0360|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6667|± |0.0246|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.1755|± |0.0201|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2014|± |0.0152|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5500|± |0.0288|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.4300|± |0.0222|
|bigbench_navigate | 0|multiple_choice_grade|0.4980|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7010|± |0.0102|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4688|± |0.0236|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1974|± |0.0126|
|bigbench_snarks | 0|multiple_choice_grade|0.7403|± |0.0327|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5426|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.5320|± |0.0158|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2280|± |0.0119|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1531|± |0.0086|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5500|± |0.0288|
```
Average: 43.55
## TruthfulQA:
```
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.410|± |0.0172|
| | |mc2 |0.578|± |0.0157|
```
# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)
Note: To use function calling, you should see the github repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Llama-3-8B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Pro-Llama-3-8B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF
# How to cite:
```bibtex
@misc{Hermes-2-Pro-Llama-3-8B,
      url={https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B},
      title={Hermes-2-Pro-Llama-3-8B},
      author={Teknium and interstellarninja and theemozilla and karan4d and huemin_art}
}
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl"], "datasets": ["teknium/OpenHermes-2.5"], "base_model": "NousResearch/Meta-Llama-3-8B", "widget": [{"example_title": "Hermes 2 Pro", "messages": [{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}]}], "model-index": [{"name": "Hermes-2-Pro-Llama-3-8B", "results": []}]} | NousResearch/Hermes-2-Pro-Llama-3-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Meta-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"has_space"
] | null | 2024-04-30T00:16:18+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #Llama-3 #instruct #finetune #chatml #DPO #RLHF #gpt4 #synthetic data #distillation #function calling #json mode #axolotl #conversational #en #dataset-teknium/OpenHermes-2.5 #base_model-NousResearch/Meta-Llama-3-8B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #has_space
|
# Hermes 2 Pro - Llama-3 8B
!image/png
## Model Description
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.
This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.
Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.
This version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - '<tools>', '<tool_call>', '<tool_response>' and their closing tags are single tokens now.
This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI
Learn more about the function calling system for this model on our github repo here: URL
## Example Outputs
### Ask for a structured JSON output:
!image/png
### Write the plot for a story where anime became real life:
!image/png
### Coding Assistance
!image/png
# Prompt Format
Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
This prompt is available as a chat template, which means you can format messages using the
'tokenizer.apply_chat_template()' method:
When tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\n' to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling.
You should use the system role with this message, followed by a function signature json as this example shows here.
To complete the function call, create a user prompt that follows the above system prompt, like so:
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: URL
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, 'tool' like so:
The assistant will then read in that data from the function's response, and generate a natural language response:
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with only a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script 'URL' available here: URL
Given the {schema} that you provide, it should follow the format of that json to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
# Benchmarks
!image/png
## GPT4All:
Average: 72.62
## AGIEval:
Average: 42.44
## BigBench:
Average: 43.55
## TruthfulQA:
# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)
Note: To use function calling, you should see the github repo above.
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
URL
!image/png
# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a URL backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:
!image/png
## Quantized Versions:
GGUF Versions Available Here: URL
# How to cite:
| [
"# Hermes 2 Pro - Llama-3 8B\n\n!image/png",
"## Model Description\n\nHermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.\n\nThis new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.\n\nHermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.\n\nThis version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - '<tools>', '<tool_call>', '<tool_response>' and their closing tags are single tokens now.\n\nThis work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI\n\nLearn more about the function calling system for this model on our github repo here: URL",
"## Example Outputs",
"### Ask for a structured JSON output:\n!image/png",
"### Write the plot for a story where anime became real life:\n!image/png",
"### Coding Assistance\n!image/png",
"# Prompt Format\n\nHermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.\n\nSystem prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.\n\nThis is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.\n\nThis format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.\n\nPrompt with system instruction (Use whatever system prompt you like, this is just an example!):\n\n\nThis prompt is available as a chat template, which means you can format messages using the\n'tokenizer.apply_chat_template()' method:\n\n\n\nWhen tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\\n' to your prompt, to ensure\nthat the model continues with an assistant response.\n\nTo utilize the prompt format without a system prompt, simply leave the line out.",
"## Prompt Format for Function Calling\n\nOur model was trained on specific system prompts and structures for Function Calling. \n\nYou should use the system role with this message, followed by a function signature json as this example shows here.\n\n\nTo complete the function call, create a user prompt that follows the above system prompt, like so:\n\n\nThe model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: URL\n\n\nOnce you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, 'tool' like so:\n\n\nThe assistant will then read in that data from the function's response, and generate a natural language response:",
"## Prompt Format for JSON Mode / Structured Outputs\n\nOur model was also trained on a specific system prompt for Structured Outputs, which should respond with only a json object response, in a specific json schema.\n\nYour schema can be made from a pydantic object using our codebase, with the standalone script 'URL' available here: URL\n\n\n\nGiven the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.",
"# Benchmarks\n\n!image/png",
"## GPT4All:\n\nAverage: 72.62",
"## AGIEval:\n\nAverage: 42.44",
"## BigBench:\n\nAverage: 43.55",
"## TruthfulQA:",
"# Inference Code\n\nHere is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)\n\nNote: To use function calling, you should see the github repo above.",
"## Inference Code for Function Calling:\n\nAll code for utilizing, parsing, and building function calling templates is available on our github:\nURL\n\n!image/png",
"# Chat Interfaces\n\nWhen quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a URL backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.\nIn LM-Studio, simply select the ChatML Prefix on the settings side pane:\n\n!image/png",
"## Quantized Versions:\n\nGGUF Versions Available Here: URL",
"# How to cite:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #Llama-3 #instruct #finetune #chatml #DPO #RLHF #gpt4 #synthetic data #distillation #function calling #json mode #axolotl #conversational #en #dataset-teknium/OpenHermes-2.5 #base_model-NousResearch/Meta-Llama-3-8B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #has_space \n",
"# Hermes 2 Pro - Llama-3 8B\n\n!image/png",
"## Model Description\n\nHermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.\n\nThis new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.\n\nHermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.\n\nThis version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - '<tools>', '<tool_call>', '<tool_response>' and their closing tags are single tokens now.\n\nThis work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI\n\nLearn more about the function calling system for this model on our github repo here: URL",
"## Example Outputs",
"### Ask for a structured JSON output:\n!image/png",
"### Write the plot for a story where anime became real life:\n!image/png",
"### Coding Assistance\n!image/png",
"# Prompt Format\n\nHermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.\n\nSystem prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.\n\nThis is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.\n\nThis format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.\n\nPrompt with system instruction (Use whatever system prompt you like, this is just an example!):\n\n\nThis prompt is available as a chat template, which means you can format messages using the\n'tokenizer.apply_chat_template()' method:\n\n\n\nWhen tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\\n' to your prompt, to ensure\nthat the model continues with an assistant response.\n\nTo utilize the prompt format without a system prompt, simply leave the line out.",
"## Prompt Format for Function Calling\n\nOur model was trained on specific system prompts and structures for Function Calling. \n\nYou should use the system role with this message, followed by a function signature json as this example shows here.\n\n\nTo complete the function call, create a user prompt that follows the above system prompt, like so:\n\n\nThe model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: URL\n\n\nOnce you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, 'tool' like so:\n\n\nThe assistant will then read in that data from the function's response, and generate a natural language response:",
"## Prompt Format for JSON Mode / Structured Outputs\n\nOur model was also trained on a specific system prompt for Structured Outputs, which should respond with only a json object response, in a specific json schema.\n\nYour schema can be made from a pydantic object using our codebase, with the standalone script 'URL' available here: URL\n\n\n\nGiven the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.",
"# Benchmarks\n\n!image/png",
"## GPT4All:\n\nAverage: 72.62",
"## AGIEval:\n\nAverage: 42.44",
"## BigBench:\n\nAverage: 43.55",
"## TruthfulQA:",
"# Inference Code\n\nHere is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)\n\nNote: To use function calling, you should see the github repo above.",
"## Inference Code for Function Calling:\n\nAll code for utilizing, parsing, and building function calling templates is available on our github:\nURL\n\n!image/png",
"# Chat Interfaces\n\nWhen quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a URL backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.\nIn LM-Studio, simply select the ChatML Prefix on the settings side pane:\n\n!image/png",
"## Quantized Versions:\n\nGGUF Versions Available Here: URL",
"# How to cite:"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_dir
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0677
- Accuracy: 0.9817
## Model description
More information needed
## Intended uses & limitations
More information needed
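As a hedged usage sketch (the repository id comes from this card's metadata, and the label names are not documented here):

```python
# Hedged usage sketch; labels are read from the checkpoint's config.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "tralon/test-v1-acc"  # taken from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("An example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```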
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-base", "model-index": [{"name": "output_dir", "results": []}]} | tralon/test-v1-acc | null | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T00:16:23+00:00 | [] | [] | TAGS
#transformers #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v3-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# output_dir
This model is a fine-tuned version of microsoft/deberta-v3-base on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0677
- Accuracy: 0.9817
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# output_dir\n\nThis model is a fine-tuned version of microsoft/deberta-v3-base on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0677\n- Accuracy: 0.9817",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v3-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# output_dir\n\nThis model is a fine-tuned version of microsoft/deberta-v3-base on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0677\n- Accuracy: 0.9817",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
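Since the card leaves this section empty, here is a generic, hedged sketch of loading a Llama-architecture chat checkpoint like this one with Transformers; the prompt and generation settings are illustrative assumptions, not documented usage.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "cilantro9246/oy0geyg"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# The 'conversational' tag suggests a chat template ships with the tokenizer.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```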
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cilantro9246/oy0geyg | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:17:17+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
## Lumimaid 0.1
<center><div style="width: 100%;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/d3QMaxy3peFTpSlWdWF-k.png" style="display: block; margin: auto;">
</div></center>
May 1st 2024: GGUFs have been fixed with [this PR of llama.cpp](https://github.com/ggerganov/llama.cpp/pull/6920)
This model uses the Llama3 **prompting format**
Llama3 trained on our RP datasets; we tried to strike a balance between ERP and RP: not too horny, but just enough.
We also added some non-RP datasets, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.
This model includes the new Luminae dataset from Ikari.
If you consider trying this model, please give us some feedback, either on the Community tab on hf or on our [Discord Server](https://discord.gg/MtCVRWTZXY).
## Credits:
- Undi
- IkariDev
## Description
This repo contains GGUF files of Lumimaid-8B-v0.1.
Switch: [8B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-GGUF) - [70B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-GGUF) - [70B-alt](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt-GGUF)
## Training data used:
- [Aesir datasets](https://huggingface.co/MinervaAI)
- [NoRobots](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt)
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP) - 8k ctx
- [toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt)
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
- Luminae-i1 (70B/70B-alt) (i2 did not exist when the 70B started training) | Luminae-i2 (8B) (this one gave better results on the 8B) - Ikari's Dataset
- [Squish42/bluemoon-fandom-1-1-rp-cleaned](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned) - 50% (randomly)
- [NobodyExistsOnTheInternet/PIPPAsharegptv2test](https://huggingface.co/datasets/NobodyExistsOnTheInternet/PIPPAsharegptv2test) - 5% (randomly)
- [cgato/SlimOrcaDedupCleaned](https://huggingface.co/datasets/cgato/SlimOrcaDedupCleaned) - 5% (randomly)
- Airoboros (reduced)
- [Capybara](https://huggingface.co/datasets/Undi95/Capybara-ShareGPT/) (reduced)
## Models used (only for 8B)
- Initial LumiMaid 8B Finetune
- Undi95/Llama-3-Unholy-8B-e4
- Undi95/Llama-3-LewdPlay-8B
## Prompt template: Llama3
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
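Because this repo ships GGUF files, a hedged llama-cpp-python sketch of chatting with the template above follows; the quant filename is a hypothetical placeholder, so substitute one of the GGUF files actually present in the repo.

```python
from llama_cpp import Llama

# Hypothetical quant filename; pick a real GGUF file from this repo.
llm = Llama(
    model_path="Llama-3-Lumimaid-8B-v0.1.Q5_K_M.gguf",
    n_ctx=8192,
    chat_format="llama-3",  # matches the Llama3 prompt template above
)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Lumimaid, a roleplay assistant."},
        {"role": "user", "content": "Hi!"},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```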
## Others
Undi: If you want to support us, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek | {"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "nsfw"]} | NeverSleep/Llama-3-Lumimaid-8B-v0.1-GGUF | null | [
"gguf",
"not-for-all-audiences",
"nsfw",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-30T00:18:41+00:00 | [] | [] | TAGS
#gguf #not-for-all-audiences #nsfw #license-cc-by-nc-4.0 #region-us
|
## Lumimaid 0.1
<center><div style="width: 100%;">
<img src="URL style="display: block; margin: auto;">
</div></center>
May 1st 2024: GGUFs have been fixed with this PR of URL
This model uses the Llama3 prompting format
Llama3 trained on our RP datasets; we tried to strike a balance between ERP and RP: not too horny, but just enough.
We also added some non-RP datasets, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.
This model includes the new Luminae dataset from Ikari.
If you consider trying this model, please give us some feedback, either on the Community tab on hf or on our Discord Server.
## Credits:
- Undi
- IkariDev
## Description
This repo contains GGUF files of Lumimaid-8B-v0.1.
Switch: 8B - 70B - 70B-alt
## Training data used:
- Aesir datasets
- NoRobots
- limarp - 8k ctx
- toxic-dpo-v0.1-sharegpt
- ToxicQAFinal
- Luminae-i1 (70B/70B-alt) (i2 did not exist when the 70B started training) | Luminae-i2 (8B) (this one gave better results on the 8B) - Ikari's Dataset
- Squish42/bluemoon-fandom-1-1-rp-cleaned - 50% (randomly)
- NobodyExistsOnTheInternet/PIPPAsharegptv2test - 5% (randomly)
- cgato/SlimOrcaDedupCleaned - 5% (randomly)
- Airoboros (reduced)
- Capybara (reduced)
## Models used (only for 8B)
- Initial LumiMaid 8B Finetune
- Undi95/Llama-3-Unholy-8B-e4
- Undi95/Llama-3-LewdPlay-8B
## Prompt template: Llama3
## Others
Undi: If you want to support us, you can here.
IkariDev: Visit my retro/neocities style website please kek | [
"## Lumimaid 0.1\n\n<center><div style=\"width: 100%;\">\n <img src=\"URL style=\"display: block; margin: auto;\">\n</div></center>\n\nMay 1st 2024: GGUF have been fixed with this PR of URL\n\nThis model uses the Llama3 prompting format\n\nLlama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.\n\nWe also added some non-RP dataset, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.\n\nThis model includes the new Luminae dataset from Ikari.\n\n\nIf you consider trying this model please give us some feedback either on the Community tab on hf or on our Discord Server.",
"## Credits:\n- Undi\n- IkariDev",
"## Description\n\nThis repo contains GGUF files of Lumimaid-8B-v0.1.\n\nSwitch: 8B - 70B - 70B-alt",
"## Training data used:\n- Aesir datasets\n- NoRobots\n- limarp - 8k ctx\n- toxic-dpo-v0.1-sharegpt\n- ToxicQAFinal\n- Luminae-i1 (70B/70B-alt) (i2 was not existing when the 70b started training) | Luminae-i2 (8B) (this one gave better results on the 8b) - Ikari's Dataset\n- Squish42/bluemoon-fandom-1-1-rp-cleaned - 50% (randomly)\n- NobodyExistsOnTheInternet/PIPPAsharegptv2test - 5% (randomly)\n- cgato/SlimOrcaDedupCleaned - 5% (randomly)\n- Airoboros (reduced)\n- Capybara (reduced)",
"## Models used (only for 8B)\n\n- Initial LumiMaid 8B Finetune\n- Undi95/Llama-3-Unholy-8B-e4\n- Undi95/Llama-3-LewdPlay-8B",
"## Prompt template: Llama3",
"## Others\n\nUndi: If you want to support us, you can here.\n\nIkariDev: Visit my retro/neocities style website please kek"
] | [
"TAGS\n#gguf #not-for-all-audiences #nsfw #license-cc-by-nc-4.0 #region-us \n",
"## Lumimaid 0.1\n\n<center><div style=\"width: 100%;\">\n <img src=\"URL style=\"display: block; margin: auto;\">\n</div></center>\n\nMay 1st 2024: GGUF have been fixed with this PR of URL\n\nThis model uses the Llama3 prompting format\n\nLlama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.\n\nWe also added some non-RP dataset, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.\n\nThis model includes the new Luminae dataset from Ikari.\n\n\nIf you consider trying this model please give us some feedback either on the Community tab on hf or on our Discord Server.",
"## Credits:\n- Undi\n- IkariDev",
"## Description\n\nThis repo contains GGUF files of Lumimaid-8B-v0.1.\n\nSwitch: 8B - 70B - 70B-alt",
"## Training data used:\n- Aesir datasets\n- NoRobots\n- limarp - 8k ctx\n- toxic-dpo-v0.1-sharegpt\n- ToxicQAFinal\n- Luminae-i1 (70B/70B-alt) (i2 was not existing when the 70b started training) | Luminae-i2 (8B) (this one gave better results on the 8b) - Ikari's Dataset\n- Squish42/bluemoon-fandom-1-1-rp-cleaned - 50% (randomly)\n- NobodyExistsOnTheInternet/PIPPAsharegptv2test - 5% (randomly)\n- cgato/SlimOrcaDedupCleaned - 5% (randomly)\n- Airoboros (reduced)\n- Capybara (reduced)",
"## Models used (only for 8B)\n\n- Initial LumiMaid 8B Finetune\n- Undi95/Llama-3-Unholy-8B-e4\n- Undi95/Llama-3-LewdPlay-8B",
"## Prompt template: Llama3",
"## Others\n\nUndi: If you want to support us, you can here.\n\nIkariDev: Visit my retro/neocities style website please kek"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0428HMA6
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
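For illustration, the `cosine_with_restarts` schedule with 80 warmup steps corresponds roughly to the hedged sketch below; the 330-step horizon matches the last logged step in the table that follows, and the linear layer is a stand-in for the actual model.

```python
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup

model = torch.nn.Linear(8, 2)  # stand-in for the fine-tuned model
optimizer = torch.optim.Adam(
    model.parameters(), lr=3e-4, betas=(0.9, 0.999), eps=1e-8
)
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer,
    num_warmup_steps=80,
    num_training_steps=330,  # ~3 epochs at the step counts logged below
    num_cycles=1,            # assumption; the card does not state the cycle count
)
for _ in range(330):
    optimizer.step()
    scheduler.step()
print(scheduler.get_last_lr())
```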
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4684 | 0.09 | 10 | 0.2658 |
| 0.176 | 0.18 | 20 | 0.1586 |
| 0.1482 | 0.27 | 30 | 0.1648 |
| 0.1537 | 0.36 | 40 | 0.1560 |
| 0.1513 | 0.45 | 50 | 0.1532 |
| 0.1524 | 0.54 | 60 | 0.1483 |
| 0.1561 | 0.63 | 70 | 0.1699 |
| 0.3903 | 0.73 | 80 | 2.6327 |
| 1.692 | 0.82 | 90 | 0.1612 |
| 3.4774 | 0.91 | 100 | 0.5470 |
| 0.5662 | 1.0 | 110 | 0.3429 |
| 2.9804 | 1.09 | 120 | 1.6472 |
| 3.087 | 1.18 | 130 | 3.2248 |
| 1.3303 | 1.27 | 140 | 1.4770 |
| 1.4895 | 1.36 | 150 | 0.5381 |
| 0.3974 | 1.45 | 160 | 0.3217 |
| 0.4325 | 1.54 | 170 | 0.9128 |
| 0.4121 | 1.63 | 180 | 0.2806 |
| 0.2431 | 1.72 | 190 | 0.2329 |
| 0.216 | 1.81 | 200 | 0.2063 |
| 0.2069 | 1.9 | 210 | 0.2249 |
| 0.2153 | 1.99 | 220 | 0.1920 |
| 0.1906 | 2.08 | 230 | 0.2070 |
| 0.1864 | 2.18 | 240 | 0.1844 |
| 0.1734 | 2.27 | 250 | 0.1766 |
| 0.1638 | 2.36 | 260 | 0.1677 |
| 0.1545 | 2.45 | 270 | 0.1604 |
| 0.1572 | 2.54 | 280 | 0.1569 |
| 0.1508 | 2.63 | 290 | 0.1577 |
| 0.1497 | 2.72 | 300 | 0.1537 |
| 0.1494 | 2.81 | 310 | 0.1526 |
| 0.148 | 2.9 | 320 | 0.1540 |
| 0.1483 | 2.99 | 330 | 0.1537 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA6", "results": []}]} | Litzy619/O0428HMA6 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T00:19:09+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
| O0428HMA6
=========
This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1537
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 80
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
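The card leaves this empty, so a hedged sketch follows. The `custom_code` tag suggests `trust_remote_code=True` may be required, and the 4-bit weights are assumed to load directly with `device_map="auto"`; none of this is documented by the author.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Slavator096/saiga-phi-3-mini-4k-4bit"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, trust_remote_code=True, device_map="auto"
)

# Illustrative prompt; the intended chat format is not documented.
messages = [{"role": "user", "content": "Hello! Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0]))
```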
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Slavator096/saiga-phi-3-mini-4k-4bit | null | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null | 2024-04-30T00:20:10+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #phi3 #text-generation #conversational #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #phi3 #text-generation #conversational #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_all-seqsight_16384_512_56M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4014
- F1 Score: 0.8146
- Accuracy: 0.8147
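Since this repo holds a PEFT adapter rather than full weights, inference presumably means loading the base checkpoint and attaching the adapter. A minimal sketch, assuming the base model works as a standard two-label sequence classifier (the `num_labels` value and the toy DNA input are illustrative assumptions):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_16384_512_56M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA-style adapter

inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
print(model(**inputs).logits)
```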
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5133 | 0.54 | 200 | 0.4592 | 0.7828 | 0.7828 |
| 0.4568 | 1.08 | 400 | 0.4803 | 0.7788 | 0.7807 |
| 0.4352 | 1.62 | 600 | 0.4476 | 0.7964 | 0.7968 |
| 0.4352 | 2.16 | 800 | 0.4471 | 0.7940 | 0.7944 |
| 0.429 | 2.7 | 1000 | 0.4445 | 0.7918 | 0.7926 |
| 0.4264 | 3.24 | 1200 | 0.4352 | 0.7975 | 0.7978 |
| 0.4206 | 3.78 | 1400 | 0.4419 | 0.7950 | 0.7958 |
| 0.4167 | 4.32 | 1600 | 0.4359 | 0.7973 | 0.7976 |
| 0.4176 | 4.86 | 1800 | 0.4448 | 0.7906 | 0.7914 |
| 0.4136 | 5.41 | 2000 | 0.4411 | 0.7937 | 0.7943 |
| 0.4132 | 5.95 | 2200 | 0.4335 | 0.8037 | 0.8037 |
| 0.4161 | 6.49 | 2400 | 0.4319 | 0.7983 | 0.7986 |
| 0.4083 | 7.03 | 2600 | 0.4317 | 0.8021 | 0.8024 |
| 0.4076 | 7.57 | 2800 | 0.4379 | 0.8039 | 0.8041 |
| 0.4034 | 8.11 | 3000 | 0.4332 | 0.7990 | 0.7993 |
| 0.4005 | 8.65 | 3200 | 0.4300 | 0.8031 | 0.8034 |
| 0.404 | 9.19 | 3400 | 0.4298 | 0.8051 | 0.8052 |
| 0.4002 | 9.73 | 3600 | 0.4303 | 0.8009 | 0.8014 |
| 0.4015 | 10.27 | 3800 | 0.4387 | 0.8017 | 0.8024 |
| 0.3978 | 10.81 | 4000 | 0.4292 | 0.8063 | 0.8063 |
| 0.3968 | 11.35 | 4200 | 0.4282 | 0.8048 | 0.8049 |
| 0.3967 | 11.89 | 4400 | 0.4273 | 0.8030 | 0.8034 |
| 0.3968 | 12.43 | 4600 | 0.4212 | 0.8062 | 0.8063 |
| 0.3918 | 12.97 | 4800 | 0.4287 | 0.8065 | 0.8068 |
| 0.3903 | 13.51 | 5000 | 0.4353 | 0.8031 | 0.8039 |
| 0.3909 | 14.05 | 5200 | 0.4239 | 0.8097 | 0.8098 |
| 0.3857 | 14.59 | 5400 | 0.4272 | 0.8100 | 0.8101 |
| 0.3881 | 15.14 | 5600 | 0.4208 | 0.8090 | 0.8090 |
| 0.3866 | 15.68 | 5800 | 0.4341 | 0.8037 | 0.8047 |
| 0.3806 | 16.22 | 6000 | 0.4351 | 0.8069 | 0.8076 |
| 0.3866 | 16.76 | 6200 | 0.4203 | 0.8104 | 0.8108 |
| 0.3855 | 17.3 | 6400 | 0.4226 | 0.8111 | 0.8113 |
| 0.3857 | 17.84 | 6600 | 0.4195 | 0.8129 | 0.8130 |
| 0.3796 | 18.38 | 6800 | 0.4341 | 0.8066 | 0.8074 |
| 0.3809 | 18.92 | 7000 | 0.4217 | 0.8105 | 0.8110 |
| 0.3761 | 19.46 | 7200 | 0.4247 | 0.8125 | 0.8128 |
| 0.3828 | 20.0 | 7400 | 0.4229 | 0.8108 | 0.8111 |
| 0.3771 | 20.54 | 7600 | 0.4216 | 0.8136 | 0.8139 |
| 0.3756 | 21.08 | 7800 | 0.4206 | 0.8141 | 0.8142 |
| 0.377 | 21.62 | 8000 | 0.4158 | 0.8137 | 0.8139 |
| 0.3748 | 22.16 | 8200 | 0.4206 | 0.8141 | 0.8144 |
| 0.3794 | 22.7 | 8400 | 0.4234 | 0.8124 | 0.8130 |
| 0.3706 | 23.24 | 8600 | 0.4165 | 0.8125 | 0.8127 |
| 0.3745 | 23.78 | 8800 | 0.4202 | 0.8115 | 0.8118 |
| 0.374 | 24.32 | 9000 | 0.4177 | 0.8136 | 0.8139 |
| 0.3718 | 24.86 | 9200 | 0.4182 | 0.8136 | 0.8139 |
| 0.382 | 25.41 | 9400 | 0.4185 | 0.8148 | 0.8150 |
| 0.3678 | 25.95 | 9600 | 0.4189 | 0.8138 | 0.8140 |
| 0.3736 | 26.49 | 9800 | 0.4201 | 0.8129 | 0.8132 |
| 0.3711 | 27.03 | 10000 | 0.4198 | 0.8132 | 0.8135 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_16384_512_56M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_16384_512_56M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:20:16+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_core\_all-seqsight\_16384\_512\_56M-L8\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4014
* F1 Score: 0.8146
* Accuracy: 0.8147
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
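As the sections above are placeholders, here is a generic, hedged sketch using the `text-generation` pipeline; the prompt and generation settings are illustrative only.

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="shallow6414/v12ofth", device_map="auto")
print(pipe("Write a haiku about the sea.", max_new_tokens=48)[0]["generated_text"])
```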
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/v12ofth | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:22:01+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
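Until the authors document this section, the following is a minimal, illustrative sketch only. It assumes the checkpoint is a BLIP-style conditional-generation model (e.g. image captioning) based on the repository's `blip` tag; the task, the image URL, and the generation settings are all assumptions.

```python
# Hedged sketch: run a BLIP-style checkpoint for conditional generation.
# The task (captioning) and the image URL below are assumptions, not
# facts documented by this card.
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

model_id = "zinoli/textimage"  # repository id for this card
processor = BlipProcessor.from_pretrained(model_id)
model = BlipForConditionalGeneration.from_pretrained(model_id)

# Any RGB image works; this URL is a placeholder.
image = Image.open(requests.get("https://example.com/photo.jpg", stream=True).raw).convert("RGB")

inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```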
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | zinoli/textimage | null | [
"transformers",
"safetensors",
"blip",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T00:24:30+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #blip #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #blip #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_weight
This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vietnamese-250h](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1765
- Wer: 0.1401
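For reference, a hedged sketch of transcribing Vietnamese speech with this checkpoint through the ASR pipeline; the audio file name is a placeholder.

```python
# Illustrative only: transcribe a local audio file with this fine-tuned
# wav2vec2 checkpoint. The file path is a placeholder; audio decoding and
# resampling are handled by the pipeline (ffmpeg needed for most formats).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="AnhDuc2507/model_weight",  # repository id for this card
)
print(asr("sample_vi.wav")["text"])
```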
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough mapping onto `TrainingArguments` is sketched after the list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 40
- mixed_precision_training: Native AMP
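A hedged mapping of these values onto `transformers.TrainingArguments`; `output_dir` and any field not listed above are assumptions, and the batch sizes assume a single device.

```python
# Rough, illustrative mapping of the hyperparameters above onto
# TrainingArguments. The Adam betas/epsilon listed match the defaults,
# so they are not set explicitly; output_dir is an assumption.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="model_weight",        # assumption
    learning_rate=1e-4,
    per_device_train_batch_size=32,   # assumes a single device
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=40,
    fp16=True,                        # "Native AMP" mixed precision
)
```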
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:-----:|:---------------:|:------:|
| 15.0719 | 1.3928 | 500 | 4.8260 | 1.0 |
| 4.4273 | 2.7855 | 1000 | 4.6865 | 0.9991 |
| 3.9296 | 4.1783 | 1500 | 4.2965 | 0.9992 |
| 3.4964 | 5.5710 | 2000 | 2.6642 | 0.9583 |
| 2.8184 | 6.9638 | 2500 | 1.7146 | 0.8718 |
| 2.132 | 8.3565 | 3000 | 1.4549 | 0.7103 |
| 1.7481 | 9.7493 | 3500 | 0.9072 | 0.5730 |
| 1.5776 | 11.1421 | 4000 | 0.7414 | 0.5132 |
| 1.3743 | 12.5348 | 4500 | 0.6621 | 0.4089 |
| 1.2417 | 13.9276 | 5000 | 0.4884 | 0.3854 |
| 1.1375 | 15.3203 | 5500 | 0.3561 | 0.3123 |
| 1.0412 | 16.7131 | 6000 | 0.3344 | 0.2945 |
| 0.981 | 18.1058 | 6500 | 0.3063 | 0.2667 |
| 0.9913 | 19.4986 | 7000 | 0.2778 | 0.2244 |
| 0.861 | 20.8914 | 7500 | 0.2511 | 0.2170 |
| 0.8314 | 22.2841 | 8000 | 0.2498 | 0.2127 |
| 0.8669 | 23.6769 | 8500 | 0.2452 | 0.2048 |
| 0.8003 | 25.0696 | 9000 | 0.2251 | 0.1830 |
| 0.7409 | 26.4624 | 9500 | 0.2292 | 0.1820 |
| 0.7282 | 27.8552 | 10000 | 0.2130 | 0.1681 |
| 0.7675 | 29.2479 | 10500 | 0.2290 | 0.1796 |
| 0.7295 | 30.6407 | 11000 | 0.1971 | 0.1617 |
| 0.6308 | 32.0334 | 11500 | 0.2032 | 0.1555 |
| 0.6251 | 33.4262 | 12000 | 0.1905 | 0.1515 |
| 0.5887 | 34.8189 | 12500 | 0.1844 | 0.1481 |
| 0.6642 | 36.2117 | 13000 | 0.1796 | 0.1444 |
| 0.6068 | 37.6045 | 13500 | 0.1808 | 0.1417 |
| 0.5862 | 38.9972 | 14000 | 0.1765 | 0.1401 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "cc-by-nc-4.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_11_0"], "metrics": ["wer"], "base_model": "nguyenvulebinh/wav2vec2-base-vietnamese-250h", "model-index": [{"name": "model_weight", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "common_voice_11_0", "type": "common_voice_11_0", "config": "vi", "split": "None", "args": "vi"}, "metrics": [{"type": "wer", "value": 0.14013683555810727, "name": "Wer"}]}]}]} | AnhDuc2507/model_weight | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"base_model:nguyenvulebinh/wav2vec2-base-vietnamese-250h",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T00:25:02+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_11_0 #base_model-nguyenvulebinh/wav2vec2-base-vietnamese-250h #license-cc-by-nc-4.0 #model-index #endpoints_compatible #region-us
| model\_weight
=============
This model is a fine-tuned version of nguyenvulebinh/wav2vec2-base-vietnamese-250h on the common\_voice\_11\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1765
* Wer: 0.1401
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 40
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 40\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_11_0 #base_model-nguyenvulebinh/wav2vec2-base-vietnamese-250h #license-cc-by-nc-4.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 40\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Uploaded model
- **Developed by:** robgonsalves
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
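A hedged loading sketch with Unsloth is shown below; the sequence length, 4-bit loading, and generation settings are illustrative assumptions, and the prompt is a placeholder.

```python
# Illustrative sketch: load this fine-tune with Unsloth, mirroring the
# 4-bit Llama-3 base named above. max_seq_length and the prompt are
# assumptions, not documented settings.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="robgonsalves/fan-fabler-lora",  # repository id for this card
    max_seq_length=2048,  # assumption
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Write a short fan story about", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```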
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | robgonsalves/fan-fabler-lora | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T00:25:24+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: robgonsalves
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: robgonsalves\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: robgonsalves\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
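Until this section is filled in, here is a minimal illustrative sketch based only on the repository's `stablelm`/`text-generation` tags; the prompt, dtype, and decoding settings are assumptions.

```python
# Hedged sketch: generic causal-LM loading implied by the repo's tags.
# The prompt and generation settings below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abc88767/model12"  # repository id for this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```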
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | abc88767/model12 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T00:26:49+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
## WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct (RLEIF)
<p align="center">
🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> • 🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">GitHub Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 |50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-Python-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | 51.6 | [Demo](http://47.103.63.15:50088/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 |37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 |28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License|
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>|
<font size=4>
| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>GSM8k</sup> | <sup>HumanEval</sup> | <sup>License</sup>|
| ----- |------| ---- |------|-------| ----- | ----- | ----- |
| <sup>**WizardLM-70B-V1.0**</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-70B-V1.0" target="_blank">HF Link</a> </sup>|<sup>📃**Coming Soon**</sup>| <sup>**7.78**</sup> | <sup>**92.91%**</sup> |<sup>**77.6%**</sup> | <sup> **50.6 pass@1**</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> |<sup>55.3%</sup> | <sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | | <sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>|
| <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | | <sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>|
| <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>|
</font>
**GitHub Repo**: https://github.com/nlpxucan/WizardLM/tree/main/WizardMath
**Twitter**: https://twitter.com/WizardLM_AI/status/1689998428200112128
**Discord**: https://discord.gg/VZjjHtWrKs
## Comparing WizardMath-V1.0 with Other LLMs.
🔥 The following figure shows that our **WizardMath-70B-V1.0 attains the fifth position in this benchmark**, surpassing ChatGPT (81.6 vs. 80.8), Claude Instant (81.6 vs. 80.9), and PaLM 2 540B (81.6 vs. 80.7).
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardMath/images/wizardmath_gsm8k.png" alt="WizardMath" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
❗<b>Note on model system prompt usage:</b>
Please use **exactly the same system prompts** as we do; we do not guarantee the accuracy of **quantized versions**.
**Default version:**
```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
```
**CoT Version:** (❗For **simple** math questions, we do NOT recommend using the CoT prompt.)
```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
```
## Inference WizardMath Demo Script
We provide the WizardMath inference demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo).
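For convenience, here is a hedged end-to-end example that uses the default prompt above with plain `transformers`; the checkpoint choice, the question, and the decoding settings are illustrative, and the official demo script linked above remains the reference implementation.

```python
# Illustrative sketch: query a WizardMath checkpoint with the documented
# default prompt. device_map="auto" requires the accelerate package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLM/WizardMath-7B-V1.0"  # one of the checkpoints listed above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

instruction = "What is 15 multiplied by 17?"
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```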
❗<b>On a common concern about the dataset:</b>
Recently, there have been significant changes in our organization's open-source policies and regulations covering code, data, and models.
Despite this, we have worked hard to obtain approval to release the model weights first; the data, however, requires stricter auditing and is still under review by our legal team.
Our researchers have no authority to release it publicly without authorization.
Thank you for your understanding.
## Citation
Please cite this repository if you use its data, methods, or code.
```
@article{luo2023wizardmath,
title={WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct},
author={Luo, Haipeng and Sun, Qingfeng and Xu, Can and Zhao, Pu and Lou, Jianguang and Tao, Chongyang and Geng, Xiubo and Lin, Qingwei and Chen, Shifeng and Zhang, Dongmei},
journal={arXiv preprint arXiv:2308.09583},
year={2023}
}
```
| {"license": "llama2"} | DTang161/ModelMergingMath | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:27:29+00:00 | [
"2304.12244",
"2306.08568",
"2308.09583"
] | [] | TAGS
#transformers #pytorch #llama #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct (RLEIF)
------------------------------------------------------------------------------------------------------------
[HF Repo](URL target=) • [Github Repo](URL target=) • [Twitter](URL target=) • [[WizardLM]](URL target=) • [[WizardCoder]](URL target=) • [[WizardMath]](URL target=)
Join our [Discord](URL target=)
Github Repo: URL
Twitter: URL
Discord: URL
Comparing WizardMath-V1.0 with Other LLMs.
------------------------------------------
The following figure shows that our WizardMath-70B-V1.0 attains the fifth position in this benchmark, surpassing ChatGPT (81.6 vs. 80.8), Claude Instant (81.6 vs. 80.9), and PaLM 2 540B (81.6 vs. 80.7).

**Note on model system prompt usage:**
Please use exactly the same system prompts as we do; we do not guarantee the accuracy of quantized versions.
Default version:
CoT Version: (For simple math questions, we do NOT recommend using the CoT prompt.)
Inference WizardMath Demo Script
--------------------------------
We provide the WizardMath inference demo code here.
**On a common concern about the dataset:**
Recently, there have been significant changes in our organization's open-source policies and regulations covering code, data, and models.
Despite this, we have worked hard to obtain approval to release the model weights first; the data, however, requires stricter auditing and is still under review by our legal team.
Our researchers have no authority to release it publicly without authorization.
Thank you for your understanding.
Please cite this repository if you use its data, methods, or code.
| [] | [
"TAGS\n#transformers #pytorch #llama #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_all-seqsight_16384_512_56M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4008
- F1 Score: 0.8262
- Accuracy: 0.8262
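A hedged sketch of attaching this adapter to its base model follows. The reported F1/accuracy metrics suggest binary sequence classification, so `num_labels=2` is an assumption, and the base model may require `trust_remote_code=True` if it ships custom modeling code.

```python
# Illustrative sketch only: load the base model, then attach this PEFT
# adapter. num_labels=2 is assumed from the reported F1/accuracy metrics.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_16384_512_56M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)
```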
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4972 | 0.54 | 200 | 0.4505 | 0.7893 | 0.7894 |
| 0.4442 | 1.08 | 400 | 0.4981 | 0.7704 | 0.7740 |
| 0.4259 | 1.62 | 600 | 0.4506 | 0.7900 | 0.7909 |
| 0.4271 | 2.16 | 800 | 0.4513 | 0.7886 | 0.7895 |
| 0.4187 | 2.7 | 1000 | 0.4498 | 0.7882 | 0.7899 |
| 0.4158 | 3.24 | 1200 | 0.4313 | 0.7988 | 0.7992 |
| 0.4093 | 3.78 | 1400 | 0.4337 | 0.7991 | 0.7995 |
| 0.404 | 4.32 | 1600 | 0.4320 | 0.8000 | 0.8002 |
| 0.4036 | 4.86 | 1800 | 0.4379 | 0.8020 | 0.8025 |
| 0.3982 | 5.41 | 2000 | 0.4342 | 0.8035 | 0.8041 |
| 0.398 | 5.95 | 2200 | 0.4342 | 0.8074 | 0.8074 |
| 0.3955 | 6.49 | 2400 | 0.4251 | 0.8072 | 0.8074 |
| 0.3886 | 7.03 | 2600 | 0.4234 | 0.8097 | 0.8100 |
| 0.3826 | 7.57 | 2800 | 0.4269 | 0.8103 | 0.8103 |
| 0.3799 | 8.11 | 3000 | 0.4224 | 0.8133 | 0.8135 |
| 0.3728 | 8.65 | 3200 | 0.4192 | 0.8101 | 0.8105 |
| 0.3769 | 9.19 | 3400 | 0.4212 | 0.8118 | 0.8118 |
| 0.3696 | 9.73 | 3600 | 0.4196 | 0.8090 | 0.8095 |
| 0.3725 | 10.27 | 3800 | 0.4307 | 0.8116 | 0.8122 |
| 0.3673 | 10.81 | 4000 | 0.4179 | 0.8137 | 0.8137 |
| 0.3642 | 11.35 | 4200 | 0.4145 | 0.8148 | 0.8149 |
| 0.3609 | 11.89 | 4400 | 0.4178 | 0.8116 | 0.8120 |
| 0.3606 | 12.43 | 4600 | 0.4125 | 0.8179 | 0.8179 |
| 0.355 | 12.97 | 4800 | 0.4231 | 0.8177 | 0.8179 |
| 0.3527 | 13.51 | 5000 | 0.4256 | 0.8098 | 0.8106 |
| 0.3525 | 14.05 | 5200 | 0.4181 | 0.8236 | 0.8236 |
| 0.3434 | 14.59 | 5400 | 0.4266 | 0.8154 | 0.8155 |
| 0.3489 | 15.14 | 5600 | 0.4214 | 0.8196 | 0.8196 |
| 0.3427 | 15.68 | 5800 | 0.4301 | 0.8100 | 0.8108 |
| 0.3406 | 16.22 | 6000 | 0.4284 | 0.8143 | 0.8149 |
| 0.3425 | 16.76 | 6200 | 0.4204 | 0.8151 | 0.8155 |
| 0.3418 | 17.3 | 6400 | 0.4250 | 0.8158 | 0.8160 |
| 0.3413 | 17.84 | 6600 | 0.4150 | 0.8210 | 0.8211 |
| 0.332 | 18.38 | 6800 | 0.4402 | 0.8123 | 0.8130 |
| 0.3378 | 18.92 | 7000 | 0.4199 | 0.8136 | 0.8140 |
| 0.3301 | 19.46 | 7200 | 0.4295 | 0.8132 | 0.8137 |
| 0.3401 | 20.0 | 7400 | 0.4261 | 0.8165 | 0.8169 |
| 0.3304 | 20.54 | 7600 | 0.4241 | 0.8165 | 0.8167 |
| 0.3289 | 21.08 | 7800 | 0.4241 | 0.8201 | 0.8203 |
| 0.3301 | 21.62 | 8000 | 0.4173 | 0.8192 | 0.8193 |
| 0.3279 | 22.16 | 8200 | 0.4239 | 0.8175 | 0.8177 |
| 0.3311 | 22.7 | 8400 | 0.4284 | 0.8147 | 0.8152 |
| 0.3215 | 23.24 | 8600 | 0.4224 | 0.8182 | 0.8184 |
| 0.3276 | 23.78 | 8800 | 0.4259 | 0.8163 | 0.8166 |
| 0.3254 | 24.32 | 9000 | 0.4199 | 0.8193 | 0.8194 |
| 0.322 | 24.86 | 9200 | 0.4241 | 0.8165 | 0.8167 |
| 0.3309 | 25.41 | 9400 | 0.4236 | 0.8192 | 0.8194 |
| 0.3178 | 25.95 | 9600 | 0.4281 | 0.8190 | 0.8193 |
| 0.3227 | 26.49 | 9800 | 0.4281 | 0.8173 | 0.8176 |
| 0.3217 | 27.03 | 10000 | 0.4272 | 0.8185 | 0.8187 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_16384_512_56M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_16384_512_56M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:27:33+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_core\_all-seqsight\_16384\_512\_56M-L32\_f
===========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4008
* F1 Score: 0.8262
* Accuracy: 0.8262
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jmodel/gemma-2b-sparseft-uot-zh-sparse-gemma-2b-top-bot-8230-with-idx__checkpoint-25000 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:27:57+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_notata-seqsight_16384_512_56M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3898
- F1 Score: 0.8280
- Accuracy: 0.8283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
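For readers reproducing this run, here is a minimal sketch of how the hyperparameters above map onto the 🤗 `TrainingArguments` API. This is illustrative only — the output directory is a placeholder, and the actual training script is not published in this card:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; output_dir is hypothetical.
training_args = TrainingArguments(
    output_dir="seqsight_gue_notata_l1",   # placeholder path
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,                      # training_steps above
)
```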
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5257 | 0.6 | 200 | 0.4382 | 0.8009 | 0.8020 |
| 0.4489 | 1.2 | 400 | 0.4074 | 0.8185 | 0.8189 |
| 0.4272 | 1.81 | 600 | 0.4148 | 0.8123 | 0.8136 |
| 0.4219 | 2.41 | 800 | 0.3961 | 0.8246 | 0.8253 |
| 0.4089 | 3.01 | 1000 | 0.3947 | 0.8254 | 0.8263 |
| 0.4008 | 3.61 | 1200 | 0.3873 | 0.8285 | 0.8295 |
| 0.4024 | 4.22 | 1400 | 0.3727 | 0.8346 | 0.8347 |
| 0.4001 | 4.82 | 1600 | 0.3782 | 0.8310 | 0.8315 |
| 0.3864 | 5.42 | 1800 | 0.3705 | 0.8337 | 0.8338 |
| 0.3915 | 6.02 | 2000 | 0.3747 | 0.8316 | 0.8321 |
| 0.3917 | 6.63 | 2200 | 0.3744 | 0.8342 | 0.8347 |
| 0.3872 | 7.23 | 2400 | 0.3805 | 0.8349 | 0.8357 |
| 0.3875 | 7.83 | 2600 | 0.3697 | 0.8366 | 0.8368 |
| 0.3804 | 8.43 | 2800 | 0.3866 | 0.8310 | 0.8321 |
| 0.384 | 9.04 | 3000 | 0.3745 | 0.8372 | 0.8378 |
| 0.3818 | 9.64 | 3200 | 0.3634 | 0.8378 | 0.8379 |
| 0.3826 | 10.24 | 3400 | 0.3642 | 0.8381 | 0.8381 |
| 0.3772 | 10.84 | 3600 | 0.3673 | 0.8377 | 0.8379 |
| 0.3796 | 11.45 | 3800 | 0.3687 | 0.8367 | 0.8370 |
| 0.3786 | 12.05 | 4000 | 0.3636 | 0.8388 | 0.8389 |
| 0.3749 | 12.65 | 4200 | 0.3717 | 0.8385 | 0.8389 |
| 0.3801 | 13.25 | 4400 | 0.3703 | 0.8386 | 0.8389 |
| 0.3793 | 13.86 | 4600 | 0.3731 | 0.8362 | 0.8366 |
| 0.3751 | 14.46 | 4800 | 0.3712 | 0.8383 | 0.8387 |
| 0.3733 | 15.06 | 5000 | 0.3686 | 0.8376 | 0.8379 |
| 0.3718 | 15.66 | 5200 | 0.3680 | 0.8390 | 0.8393 |
| 0.3707 | 16.27 | 5400 | 0.3698 | 0.8363 | 0.8364 |
| 0.3744 | 16.87 | 5600 | 0.3779 | 0.8353 | 0.8361 |
| 0.3702 | 17.47 | 5800 | 0.3764 | 0.8354 | 0.8361 |
| 0.3768 | 18.07 | 6000 | 0.3744 | 0.8356 | 0.8363 |
| 0.3691 | 18.67 | 6200 | 0.3696 | 0.8371 | 0.8376 |
| 0.3716 | 19.28 | 6400 | 0.3689 | 0.8384 | 0.8387 |
| 0.3714 | 19.88 | 6600 | 0.3677 | 0.8389 | 0.8393 |
| 0.3656 | 20.48 | 6800 | 0.3716 | 0.8386 | 0.8391 |
| 0.3769 | 21.08 | 7000 | 0.3720 | 0.8380 | 0.8385 |
| 0.3713 | 21.69 | 7200 | 0.3658 | 0.8380 | 0.8381 |
| 0.3643 | 22.29 | 7400 | 0.3701 | 0.8381 | 0.8385 |
| 0.3639 | 22.89 | 7600 | 0.3688 | 0.8375 | 0.8378 |
| 0.3683 | 23.49 | 7800 | 0.3694 | 0.8368 | 0.8372 |
| 0.3713 | 24.1 | 8000 | 0.3744 | 0.8360 | 0.8366 |
| 0.369 | 24.7 | 8200 | 0.3756 | 0.8355 | 0.8363 |
| 0.3636 | 25.3 | 8400 | 0.3734 | 0.8378 | 0.8383 |
| 0.3684 | 25.9 | 8600 | 0.3672 | 0.8393 | 0.8396 |
| 0.3683 | 26.51 | 8800 | 0.3667 | 0.8390 | 0.8393 |
| 0.3685 | 27.11 | 9000 | 0.3670 | 0.8378 | 0.8381 |
| 0.3704 | 27.71 | 9200 | 0.3696 | 0.8387 | 0.8391 |
| 0.3613 | 28.31 | 9400 | 0.3734 | 0.8404 | 0.8410 |
| 0.3645 | 28.92 | 9600 | 0.3702 | 0.8387 | 0.8391 |
| 0.3687 | 29.52 | 9800 | 0.3696 | 0.8379 | 0.8383 |
| 0.362 | 30.12 | 10000 | 0.3701 | 0.8377 | 0.8381 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_16384_512_56M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_16384_512_56M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:28:33+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_core\_notata-seqsight\_16384\_512\_56M-L1\_f
=============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3898
* F1 Score: 0.8280
* Accuracy: 0.8283
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_notata-seqsight_16384_512_56M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3771
- F1 Score: 0.8314
- Accuracy: 0.8315
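Because this repository stores a PEFT adapter rather than full model weights, inference typically means loading the base model and attaching the adapter on top. A hedged sketch follows — the classification head, label count, and input handling are assumptions, since the card does not document them:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_16384_512_56M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)  # num_labels is an assumption
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the trained adapter

inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")  # an illustrative DNA sequence
logits = model(**inputs).logits
```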
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5004 | 0.6 | 200 | 0.4030 | 0.8186 | 0.8185 |
| 0.4219 | 1.2 | 400 | 0.3836 | 0.8322 | 0.8323 |
| 0.4022 | 1.81 | 600 | 0.4021 | 0.8193 | 0.8206 |
| 0.3967 | 2.41 | 800 | 0.3804 | 0.8310 | 0.8315 |
| 0.3881 | 3.01 | 1000 | 0.3825 | 0.8328 | 0.8336 |
| 0.3808 | 3.61 | 1200 | 0.3737 | 0.8365 | 0.8370 |
| 0.3844 | 4.22 | 1400 | 0.3676 | 0.8373 | 0.8376 |
| 0.3828 | 4.82 | 1600 | 0.3731 | 0.8351 | 0.8357 |
| 0.3695 | 5.42 | 1800 | 0.3655 | 0.8349 | 0.8349 |
| 0.3772 | 6.02 | 2000 | 0.3720 | 0.8348 | 0.8353 |
| 0.3748 | 6.63 | 2200 | 0.3661 | 0.8386 | 0.8389 |
| 0.3712 | 7.23 | 2400 | 0.3770 | 0.8335 | 0.8342 |
| 0.3703 | 7.83 | 2600 | 0.3631 | 0.8372 | 0.8372 |
| 0.364 | 8.43 | 2800 | 0.3732 | 0.8337 | 0.8344 |
| 0.367 | 9.04 | 3000 | 0.3682 | 0.8380 | 0.8383 |
| 0.3641 | 9.64 | 3200 | 0.3623 | 0.8408 | 0.8410 |
| 0.364 | 10.24 | 3400 | 0.3621 | 0.8420 | 0.8421 |
| 0.3584 | 10.84 | 3600 | 0.3608 | 0.8401 | 0.8402 |
| 0.361 | 11.45 | 3800 | 0.3622 | 0.8387 | 0.8389 |
| 0.3587 | 12.05 | 4000 | 0.3606 | 0.8408 | 0.8408 |
| 0.3552 | 12.65 | 4200 | 0.3658 | 0.8380 | 0.8381 |
| 0.3597 | 13.25 | 4400 | 0.3650 | 0.8405 | 0.8408 |
| 0.3595 | 13.86 | 4600 | 0.3695 | 0.8398 | 0.8400 |
| 0.3545 | 14.46 | 4800 | 0.3677 | 0.8418 | 0.8421 |
| 0.3524 | 15.06 | 5000 | 0.3585 | 0.8401 | 0.8402 |
| 0.348 | 15.66 | 5200 | 0.3656 | 0.8401 | 0.8404 |
| 0.3488 | 16.27 | 5400 | 0.3660 | 0.8415 | 0.8415 |
| 0.3524 | 16.87 | 5600 | 0.3704 | 0.8387 | 0.8393 |
| 0.3481 | 17.47 | 5800 | 0.3650 | 0.8389 | 0.8393 |
| 0.354 | 18.07 | 6000 | 0.3649 | 0.8400 | 0.8404 |
| 0.3456 | 18.67 | 6200 | 0.3624 | 0.8410 | 0.8413 |
| 0.3488 | 19.28 | 6400 | 0.3655 | 0.8417 | 0.8419 |
| 0.3468 | 19.88 | 6600 | 0.3603 | 0.8444 | 0.8447 |
| 0.3426 | 20.48 | 6800 | 0.3604 | 0.8425 | 0.8427 |
| 0.3518 | 21.08 | 7000 | 0.3656 | 0.8402 | 0.8406 |
| 0.3481 | 21.69 | 7200 | 0.3630 | 0.8407 | 0.8408 |
| 0.3377 | 22.29 | 7400 | 0.3635 | 0.8416 | 0.8419 |
| 0.3385 | 22.89 | 7600 | 0.3657 | 0.8401 | 0.8404 |
| 0.343 | 23.49 | 7800 | 0.3638 | 0.8421 | 0.8423 |
| 0.3446 | 24.1 | 8000 | 0.3695 | 0.8382 | 0.8387 |
| 0.3432 | 24.7 | 8200 | 0.3689 | 0.8390 | 0.8395 |
| 0.3386 | 25.3 | 8400 | 0.3676 | 0.8412 | 0.8415 |
| 0.3429 | 25.9 | 8600 | 0.3610 | 0.8421 | 0.8423 |
| 0.3422 | 26.51 | 8800 | 0.3625 | 0.8419 | 0.8421 |
| 0.3412 | 27.11 | 9000 | 0.3596 | 0.8420 | 0.8421 |
| 0.3423 | 27.71 | 9200 | 0.3646 | 0.8412 | 0.8415 |
| 0.3335 | 28.31 | 9400 | 0.3675 | 0.8409 | 0.8413 |
| 0.3378 | 28.92 | 9600 | 0.3646 | 0.8417 | 0.8419 |
| 0.3408 | 29.52 | 9800 | 0.3639 | 0.8411 | 0.8413 |
| 0.3342 | 30.12 | 10000 | 0.3644 | 0.8413 | 0.8415 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_16384_512_56M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_16384_512_56M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:28:49+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_core\_notata-seqsight\_16384\_512\_56M-L8\_f
=============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3771
* F1 Score: 0.8314
* Accuracy: 0.8315
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers | This is the merged Llama 3 8B 1M base model, obtained by merging the Llama 3 8B base model with the LoRA extracted from Gradient AI's 1M-context-length Instruct model https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k
The LoRA adapter is available @ https://huggingface.co/winglian/llama-3-1m-context-gradient-lora
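For reference, the merge itself can be reproduced with `peft`'s `merge_and_unload`. A sketch under stated assumptions — the Meta base checkpoint id and the output directory are illustrative, not taken from this card:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")  # assumed base checkpoint
peft_model = PeftModel.from_pretrained(base, "winglian/llama-3-1m-context-gradient-lora")
merged = peft_model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("Meta-Llama-3-8B-1M")  # hypothetical output directory
```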
| {} | winglian/Meta-Llama-3-8B-1M | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:29:50+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| This is the merged Llama 3 8B 1M base model, obtained by merging the Llama 3 8B base model with the LoRA extracted from Gradient AI's 1M-context-length Instruct model URL
The LoRA adapter is available @ URL
| [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_notata-seqsight_16384_512_56M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4010
- F1 Score: 0.8328
- Accuracy: 0.8329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4809 | 0.6 | 200 | 0.3947 | 0.8277 | 0.8280 |
| 0.4065 | 1.2 | 400 | 0.3726 | 0.8370 | 0.8370 |
| 0.3919 | 1.81 | 600 | 0.3891 | 0.8299 | 0.8308 |
| 0.3855 | 2.41 | 800 | 0.3781 | 0.8352 | 0.8357 |
| 0.3798 | 3.01 | 1000 | 0.3790 | 0.8346 | 0.8353 |
| 0.3714 | 3.61 | 1200 | 0.3723 | 0.8345 | 0.8351 |
| 0.3753 | 4.22 | 1400 | 0.3640 | 0.8388 | 0.8391 |
| 0.373 | 4.82 | 1600 | 0.3716 | 0.8373 | 0.8379 |
| 0.358 | 5.42 | 1800 | 0.3655 | 0.8395 | 0.8395 |
| 0.3666 | 6.02 | 2000 | 0.3715 | 0.8375 | 0.8381 |
| 0.3612 | 6.63 | 2200 | 0.3613 | 0.8417 | 0.8419 |
| 0.3546 | 7.23 | 2400 | 0.3842 | 0.8377 | 0.8383 |
| 0.3563 | 7.83 | 2600 | 0.3621 | 0.8402 | 0.8402 |
| 0.348 | 8.43 | 2800 | 0.3721 | 0.8371 | 0.8378 |
| 0.3522 | 9.04 | 3000 | 0.3716 | 0.8342 | 0.8347 |
| 0.3464 | 9.64 | 3200 | 0.3678 | 0.8426 | 0.8428 |
| 0.3449 | 10.24 | 3400 | 0.3635 | 0.8426 | 0.8427 |
| 0.3399 | 10.84 | 3600 | 0.3582 | 0.8407 | 0.8408 |
| 0.3407 | 11.45 | 3800 | 0.3623 | 0.8407 | 0.8410 |
| 0.3368 | 12.05 | 4000 | 0.3619 | 0.8402 | 0.8402 |
| 0.3312 | 12.65 | 4200 | 0.3660 | 0.8427 | 0.8428 |
| 0.3356 | 13.25 | 4400 | 0.3635 | 0.8416 | 0.8417 |
| 0.3365 | 13.86 | 4600 | 0.3693 | 0.8425 | 0.8427 |
| 0.3288 | 14.46 | 4800 | 0.3708 | 0.8414 | 0.8417 |
| 0.3253 | 15.06 | 5000 | 0.3565 | 0.8430 | 0.8430 |
| 0.3201 | 15.66 | 5200 | 0.3701 | 0.8396 | 0.8398 |
| 0.3206 | 16.27 | 5400 | 0.3743 | 0.8422 | 0.8423 |
| 0.3232 | 16.87 | 5600 | 0.3850 | 0.8378 | 0.8385 |
| 0.3173 | 17.47 | 5800 | 0.3726 | 0.8410 | 0.8413 |
| 0.3252 | 18.07 | 6000 | 0.3670 | 0.8400 | 0.8404 |
| 0.3135 | 18.67 | 6200 | 0.3647 | 0.8458 | 0.8461 |
| 0.3139 | 19.28 | 6400 | 0.3763 | 0.8458 | 0.8461 |
| 0.3142 | 19.88 | 6600 | 0.3658 | 0.8429 | 0.8432 |
| 0.3083 | 20.48 | 6800 | 0.3661 | 0.8422 | 0.8423 |
| 0.315 | 21.08 | 7000 | 0.3724 | 0.8432 | 0.8434 |
| 0.3122 | 21.69 | 7200 | 0.3734 | 0.8420 | 0.8421 |
| 0.2988 | 22.29 | 7400 | 0.3770 | 0.8409 | 0.8413 |
| 0.3029 | 22.89 | 7600 | 0.3751 | 0.8401 | 0.8404 |
| 0.3061 | 23.49 | 7800 | 0.3732 | 0.8403 | 0.8406 |
| 0.306 | 24.1 | 8000 | 0.3805 | 0.8387 | 0.8391 |
| 0.3016 | 24.7 | 8200 | 0.3812 | 0.8394 | 0.8398 |
| 0.2995 | 25.3 | 8400 | 0.3848 | 0.8397 | 0.8400 |
| 0.3024 | 25.9 | 8600 | 0.3730 | 0.8423 | 0.8425 |
| 0.301 | 26.51 | 8800 | 0.3771 | 0.8431 | 0.8432 |
| 0.3006 | 27.11 | 9000 | 0.3726 | 0.8417 | 0.8419 |
| 0.3033 | 27.71 | 9200 | 0.3766 | 0.8400 | 0.8402 |
| 0.2922 | 28.31 | 9400 | 0.3816 | 0.8408 | 0.8412 |
| 0.2944 | 28.92 | 9600 | 0.3815 | 0.8409 | 0.8412 |
| 0.298 | 29.52 | 9800 | 0.3784 | 0.8394 | 0.8396 |
| 0.2938 | 30.12 | 10000 | 0.3792 | 0.8394 | 0.8396 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_16384_512_56M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_16384_512_56M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:31:19+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_core\_notata-seqsight\_16384\_512\_56M-L32\_f
==============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4010
* F1 Score: 0.8328
* Accuracy: 0.8329
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_tata-seqsight_16384_512_56M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5719
- F1 Score: 0.8564
- Accuracy: 0.8564
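The F1 and accuracy columns throughout these cards can be reproduced with a standard `compute_metrics` callback. A minimal sketch, assuming binary labels and scikit-learn (the exact averaging used by the original script is not documented):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        # macro averaging is an assumption; the card does not specify it
        "f1": f1_score(labels, preds, average="macro"),
    }
```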
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5599 | 5.13 | 200 | 0.5249 | 0.7487 | 0.7488 |
| 0.4807 | 10.26 | 400 | 0.4940 | 0.7606 | 0.7635 |
| 0.4126 | 15.38 | 600 | 0.3923 | 0.8124 | 0.8124 |
| 0.3694 | 20.51 | 800 | 0.3829 | 0.8319 | 0.8320 |
| 0.3393 | 25.64 | 1000 | 0.3755 | 0.8401 | 0.8401 |
| 0.3144 | 30.77 | 1200 | 0.3570 | 0.8449 | 0.8450 |
| 0.2965 | 35.9 | 1400 | 0.3510 | 0.8581 | 0.8581 |
| 0.2756 | 41.03 | 1600 | 0.3610 | 0.8578 | 0.8581 |
| 0.258 | 46.15 | 1800 | 0.3455 | 0.8662 | 0.8662 |
| 0.242 | 51.28 | 2000 | 0.3535 | 0.8580 | 0.8581 |
| 0.2347 | 56.41 | 2200 | 0.3575 | 0.8678 | 0.8679 |
| 0.2228 | 61.54 | 2400 | 0.3646 | 0.8662 | 0.8662 |
| 0.2117 | 66.67 | 2600 | 0.3563 | 0.8711 | 0.8711 |
| 0.2053 | 71.79 | 2800 | 0.3702 | 0.8711 | 0.8711 |
| 0.1951 | 76.92 | 3000 | 0.3824 | 0.8644 | 0.8646 |
| 0.1808 | 82.05 | 3200 | 0.4126 | 0.8628 | 0.8630 |
| 0.1724 | 87.18 | 3400 | 0.4275 | 0.8561 | 0.8564 |
| 0.164 | 92.31 | 3600 | 0.4147 | 0.8678 | 0.8679 |
| 0.1599 | 97.44 | 3800 | 0.4262 | 0.8710 | 0.8711 |
| 0.1537 | 102.56 | 4000 | 0.4390 | 0.8629 | 0.8630 |
| 0.1496 | 107.69 | 4200 | 0.4348 | 0.8662 | 0.8662 |
| 0.1415 | 112.82 | 4400 | 0.4421 | 0.8710 | 0.8711 |
| 0.1345 | 117.95 | 4600 | 0.4459 | 0.8710 | 0.8711 |
| 0.1256 | 123.08 | 4800 | 0.4687 | 0.8710 | 0.8711 |
| 0.1212 | 128.21 | 5000 | 0.4867 | 0.8661 | 0.8662 |
| 0.1203 | 133.33 | 5200 | 0.4864 | 0.8694 | 0.8695 |
| 0.1156 | 138.46 | 5400 | 0.4784 | 0.8695 | 0.8695 |
| 0.1154 | 143.59 | 5600 | 0.4918 | 0.8743 | 0.8744 |
| 0.1124 | 148.72 | 5800 | 0.4743 | 0.8776 | 0.8777 |
| 0.1067 | 153.85 | 6000 | 0.4765 | 0.8777 | 0.8777 |
| 0.1052 | 158.97 | 6200 | 0.5154 | 0.8759 | 0.8760 |
| 0.1009 | 164.1 | 6400 | 0.5184 | 0.8710 | 0.8711 |
| 0.095 | 169.23 | 6600 | 0.5307 | 0.8678 | 0.8679 |
| 0.094 | 174.36 | 6800 | 0.5175 | 0.8677 | 0.8679 |
| 0.0926 | 179.49 | 7000 | 0.5550 | 0.8643 | 0.8646 |
| 0.0968 | 184.62 | 7200 | 0.5285 | 0.8711 | 0.8711 |
| 0.0896 | 189.74 | 7400 | 0.5399 | 0.8710 | 0.8711 |
| 0.0877 | 194.87 | 7600 | 0.5470 | 0.8694 | 0.8695 |
| 0.0826 | 200.0 | 7800 | 0.5788 | 0.8611 | 0.8613 |
| 0.0825 | 205.13 | 8000 | 0.5638 | 0.8627 | 0.8630 |
| 0.0843 | 210.26 | 8200 | 0.5514 | 0.8676 | 0.8679 |
| 0.0787 | 215.38 | 8400 | 0.5756 | 0.8676 | 0.8679 |
| 0.08 | 220.51 | 8600 | 0.5816 | 0.8676 | 0.8679 |
| 0.0763 | 225.64 | 8800 | 0.5837 | 0.8610 | 0.8613 |
| 0.0805 | 230.77 | 9000 | 0.5651 | 0.8792 | 0.8793 |
| 0.0758 | 235.9 | 9200 | 0.5758 | 0.8742 | 0.8744 |
| 0.0784 | 241.03 | 9400 | 0.5818 | 0.8659 | 0.8662 |
| 0.0782 | 246.15 | 9600 | 0.5835 | 0.8676 | 0.8679 |
| 0.0747 | 251.28 | 9800 | 0.5691 | 0.8693 | 0.8695 |
| 0.0793 | 256.41 | 10000 | 0.5701 | 0.8693 | 0.8695 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_16384_512_56M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_16384_512_56M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:33:21+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_core\_tata-seqsight\_16384\_512\_56M-L8\_f
===========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5719
* F1 Score: 0.8564
* Accuracy: 0.8564
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_tata-seqsight_16384_512_56M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4086
- F1 Score: 0.8467
- Accuracy: 0.8467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
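The L1/L8/L32 suffixes on these sibling repositories suggest differently sized adapters. Purely as an illustration of how such an adapter is configured with `peft` — every value below is an assumption, and the real configuration lives in this repo's `adapter_config.json`:

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=1,                  # assumption: the "-L1" suffix may denote the adapter size
    lora_alpha=16,        # illustrative
    lora_dropout=0.1,     # illustrative
    task_type="SEQ_CLS",  # sequence classification, matching the GUE task
)
model = get_peft_model(base_model, lora_config)  # base_model loaded elsewhere
```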
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.579 | 5.13 | 200 | 0.5664 | 0.7147 | 0.7194 |
| 0.5264 | 10.26 | 400 | 0.5462 | 0.7293 | 0.7325 |
| 0.4963 | 15.38 | 600 | 0.5508 | 0.7386 | 0.7439 |
| 0.4632 | 20.51 | 800 | 0.4771 | 0.7708 | 0.7716 |
| 0.4267 | 25.64 | 1000 | 0.4270 | 0.7910 | 0.7912 |
| 0.4008 | 30.77 | 1200 | 0.4070 | 0.8204 | 0.8206 |
| 0.3827 | 35.9 | 1400 | 0.4263 | 0.8102 | 0.8108 |
| 0.3694 | 41.03 | 1600 | 0.3982 | 0.8139 | 0.8140 |
| 0.354 | 46.15 | 1800 | 0.3947 | 0.8238 | 0.8238 |
| 0.3433 | 51.28 | 2000 | 0.3895 | 0.8270 | 0.8271 |
| 0.3364 | 56.41 | 2200 | 0.3819 | 0.8401 | 0.8401 |
| 0.333 | 61.54 | 2400 | 0.3784 | 0.8335 | 0.8336 |
| 0.3241 | 66.67 | 2600 | 0.3801 | 0.8532 | 0.8532 |
| 0.3203 | 71.79 | 2800 | 0.3713 | 0.8416 | 0.8418 |
| 0.3139 | 76.92 | 3000 | 0.3740 | 0.8483 | 0.8483 |
| 0.3028 | 82.05 | 3200 | 0.3758 | 0.8580 | 0.8581 |
| 0.2976 | 87.18 | 3400 | 0.3834 | 0.8529 | 0.8532 |
| 0.2929 | 92.31 | 3600 | 0.3634 | 0.8515 | 0.8515 |
| 0.2893 | 97.44 | 3800 | 0.3657 | 0.8499 | 0.8499 |
| 0.2839 | 102.56 | 4000 | 0.3635 | 0.8532 | 0.8532 |
| 0.2832 | 107.69 | 4200 | 0.3600 | 0.8548 | 0.8548 |
| 0.2744 | 112.82 | 4400 | 0.3597 | 0.8580 | 0.8581 |
| 0.2711 | 117.95 | 4600 | 0.3621 | 0.8548 | 0.8548 |
| 0.2619 | 123.08 | 4800 | 0.3714 | 0.8580 | 0.8581 |
| 0.2612 | 128.21 | 5000 | 0.3667 | 0.8563 | 0.8564 |
| 0.2587 | 133.33 | 5200 | 0.3589 | 0.8532 | 0.8532 |
| 0.254 | 138.46 | 5400 | 0.3580 | 0.8548 | 0.8548 |
| 0.2462 | 143.59 | 5600 | 0.3607 | 0.8597 | 0.8597 |
| 0.2484 | 148.72 | 5800 | 0.3655 | 0.8597 | 0.8597 |
| 0.2437 | 153.85 | 6000 | 0.3671 | 0.8532 | 0.8532 |
| 0.239 | 158.97 | 6200 | 0.3703 | 0.8466 | 0.8467 |
| 0.2366 | 164.1 | 6400 | 0.3733 | 0.8531 | 0.8532 |
| 0.235 | 169.23 | 6600 | 0.3692 | 0.8499 | 0.8499 |
| 0.2278 | 174.36 | 6800 | 0.3699 | 0.8515 | 0.8515 |
| 0.2289 | 179.49 | 7000 | 0.3677 | 0.8548 | 0.8548 |
| 0.2289 | 184.62 | 7200 | 0.3776 | 0.8548 | 0.8548 |
| 0.2235 | 189.74 | 7400 | 0.3760 | 0.8499 | 0.8499 |
| 0.2226 | 194.87 | 7600 | 0.3726 | 0.8499 | 0.8499 |
| 0.2196 | 200.0 | 7800 | 0.3752 | 0.8483 | 0.8483 |
| 0.2207 | 205.13 | 8000 | 0.3746 | 0.8466 | 0.8467 |
| 0.2196 | 210.26 | 8200 | 0.3703 | 0.8532 | 0.8532 |
| 0.2161 | 215.38 | 8400 | 0.3726 | 0.8548 | 0.8548 |
| 0.2141 | 220.51 | 8600 | 0.3784 | 0.8532 | 0.8532 |
| 0.2086 | 225.64 | 8800 | 0.3839 | 0.8499 | 0.8499 |
| 0.2128 | 230.77 | 9000 | 0.3819 | 0.8483 | 0.8483 |
| 0.2087 | 235.9 | 9200 | 0.3826 | 0.8531 | 0.8532 |
| 0.2145 | 241.03 | 9400 | 0.3782 | 0.8532 | 0.8532 |
| 0.2137 | 246.15 | 9600 | 0.3797 | 0.8482 | 0.8483 |
| 0.2067 | 251.28 | 9800 | 0.3792 | 0.8483 | 0.8483 |
| 0.2144 | 256.41 | 10000 | 0.3783 | 0.8515 | 0.8515 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_16384_512_56M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_16384_512_56M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:33:21+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_core\_tata-seqsight\_16384\_512\_56M-L1\_f
===========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4086
* F1 Score: 0.8467
* Accuracy: 0.8467
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers | This is the **Full-Weight** release of the WizardLM-13B V1.2 model, which is trained from **Llama-2 13b**.
## WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions
<p align="center">
🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News
- 🔥🔥🔥[2023/08/26] We released **WizardCoder-Python-34B-V1.0**, which achieves the **73.2 pass@1** and surpasses **GPT4 (2023/03/15)**, **ChatGPT-3.5**, and **Claude2** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). For more details, please refer to [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder).
- [2023/06/16] We released **WizardCoder-15B-V1.0**, which surpasses **Claude-Plus (+6.8)**, **Bard (+15.3)** and **InstructCodeT5+ (+22.3)** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). For more details, please refer to [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder).
| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 |50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-Python-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | 51.6 | [Demo](http://47.103.63.15:50088/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 |37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 |28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
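For context, the pass@1 figures in the table above are conventionally computed with the unbiased pass@k estimator from the Codex paper (Chen et al., 2021). A sketch of that standard formula — this is the common definition, not code taken from the WizardLM repository:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n samples generated per problem, c of them correct."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))
```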
- 🔥 [08/11/2023] We released **WizardMath** models.
- 🔥 Our **WizardMath-70B-V1.0** model slightly outperforms some closed-source LLMs on the GSM8K, including **ChatGPT 3.5**, **Claude Instant 1** and **PaLM 2 540B**.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **81.6 pass@1** on the [GSM8k Benchmarks](https://github.com/openai/grade-school-math), which is **24.8** points higher than the SOTA open-source LLM.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **22.7 pass@1** on the [MATH Benchmarks](https://github.com/hendrycks/math), which is **9.2** points higher than the SOTA open-source LLM.
| Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License|
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>|
<font size=4>
| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>WizardEval</sup> | <sup>HumanEval</sup> | <sup>License</sup>|
| ----- |------| ---- |------|-------| ----- | ----- | ----- |
| <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> | <sup>101.4% </sup>|<sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | <sup>99.3% </sup> |<sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>|
| <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | <sup>97.8% </sup> | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | <sup>89.1% </sup> |<sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>|
| <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | <sup>78.0% </sup> |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>|
</font>
**Repository**: https://github.com/nlpxucan/WizardLM
**Twitter**:
- 🔥🔥🔥 [7/25/2023] We released **WizardLM V1.2** models. The **WizardLM-13B-V1.2** is here ([Demo_13B-V1.2](https://b7a19878988c8c73.gradio.app), [Demo_13B-V1.2_bak-1](https://d0a37a76e0ac4b52.gradio.app/), [Full Model Weight](https://huggingface.co/WizardLM/WizardLM-13B-V1.2)). Please check out the [paper](https://arxiv.org/abs/2304.12244).
- 🔥🔥🔥 [7/25/2023] The **WizardLM-13B-V1.2** achieves **7.06** on [MT-Bench Leaderboard](https://chat.lmsys.org/?leaderboard), **89.17%** on [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/), and **101.4%** on [WizardLM Eval](https://github.com/nlpxucan/WizardLM/blob/main/WizardLM/data/WizardLM_testset.jsonl). (Note: MT-Bench and AlpacaEval are all self-test, will push update and request review. All tests are completed under their official settings.)
❗<b>Note on model system prompt usage:</b>
<b>WizardLM</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
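In code, the multi-turn prompt above can be assembled as follows. This is a minimal sketch; the official demo script linked below remains the authoritative version:

```python
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns):
    """turns: list of (user_msg, assistant_msg_or_None) pairs; None marks the pending turn."""
    prompt = SYSTEM + " "
    for user_msg, assistant_msg in turns:
        prompt += f"USER: {user_msg} ASSISTANT:"
        if assistant_msg is not None:
            prompt += f" {assistant_msg}</s>"
    return prompt

# build_prompt([("Hi", "Hello."), ("Who are you?", None)]) reproduces the example above.
```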
## Inference WizardLM Demo Script
We provide the inference WizardLM demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo).
Please cite the paper if you use the data or code from WizardLM.
```
@article{xu2023wizardlm,
title={Wizardlm: Empowering large language models to follow complex instructions},
author={Xu, Can and Sun, Qingfeng and Zheng, Kai and Geng, Xiubo and Zhao, Pu and Feng, Jiazhan and Tao, Chongyang and Jiang, Daxin},
journal={arXiv preprint arXiv:2304.12244},
year={2023}
}
```
❗<b>To address common concerns about the dataset:</b>
Recently, there have been clear changes in our organization's open-source policies and regulations covering code, data, and models.
Despite this, we have worked hard to release the model weights first; the data requires stricter auditing and is still under review by our legal team.
Our researchers have no authority to release the data publicly without authorization.
Thank you for your understanding. | {"license": "llama2"} | DTang161/ModelMergingLM | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:37:15+00:00 | [
"2304.12244",
"2306.08568",
"2308.09583"
] | [] | TAGS
#transformers #pytorch #llama #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| This is the Full-Weight release of the WizardLM-13B V1.2 model, which is trained from Llama-2 13b.
WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions
-------------------------------------------------------------------------------------
[HF Repo](URL target=) • [Github Repo](URL target=) • [Twitter](URL target=) • [[WizardLM]](URL target=) • [[WizardCoder]](URL target=) • [[WizardMath]](URL target=)
Join our [Discord](URL target=)
News
----
* [2023/08/26] We released WizardCoder-Python-34B-V1.0, which achieves the 73.2 pass@1 and surpasses GPT4 (2023/03/15), ChatGPT-3.5, and Claude2 on the HumanEval Benchmarks. For more details, please refer to WizardCoder.
* [2023/06/16] We released WizardCoder-15B-V1.0, which surpasses Claude-Plus (+6.8), Bard (+15.3) and InstructCodeT5+ (+22.3) on the HumanEval Benchmarks. For more details, please refer to WizardCoder.
* [08/11/2023] We released WizardMath models.
* Our WizardMath-70B-V1.0 model slightly outperforms some closed-source LLMs on the GSM8K, including ChatGPT 3.5, Claude Instant 1 and PaLM 2 540B.
* Our WizardMath-70B-V1.0 model achieves 81.6 pass@1 on the GSM8k Benchmarks, which is 24.8 points higher than the SOTA open-source LLM.
* Our WizardMath-70B-V1.0 model achieves 22.7 pass@1 on the MATH Benchmarks, which is 9.2 points higher than the SOTA open-source LLM.
Repository: URL
Twitter:
* [7/25/2023] We released WizardLM V1.2 models. The WizardLM-13B-V1.2 is here (Demo\_13B-V1.2, Demo\_13B-V1.2\_bak-1, Full Model Weight). Please check out the paper.
* [7/25/2023] The WizardLM-13B-V1.2 achieves 7.06 on MT-Bench Leaderboard, 89.17% on AlpacaEval Leaderboard, and 101.4% on WizardLM Eval. (Note: MT-Bench and AlpacaEval are all self-test, will push update and request review. All tests are completed under their official settings.)
**Note on model system prompt usage:**
**WizardLM** adopts the prompt format from **Vicuna** and supports multi-turn conversation. The prompt should be as follows:
Inference WizardLM Demo Script
------------------------------
We provide the inference WizardLM demo code here.
Please cite the paper if you use the data or code from WizardLM.
**To address common concerns about the dataset:**
Recently, there have been clear changes in our organization's open-source policies and regulations covering code, data, and models.
Despite this, we have worked hard to release the model weights first; the data requires stricter auditing and is still under review by our legal team.
Our researchers have no authority to release the data publicly without authorization.
Thank you for your understanding.
| [] | [
"TAGS\n#transformers #pytorch #llama #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_tata-seqsight_16384_512_56M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4512
- F1 Score: 0.8564
- Accuracy: 0.8564
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5398 | 5.13 | 200 | 0.4909 | 0.7700 | 0.7700 |
| 0.4278 | 10.26 | 400 | 0.4171 | 0.8099 | 0.8108 |
| 0.3493 | 15.38 | 600 | 0.3599 | 0.8499 | 0.8499 |
| 0.2984 | 20.51 | 800 | 0.3617 | 0.8514 | 0.8515 |
| 0.2562 | 25.64 | 1000 | 0.3688 | 0.8581 | 0.8581 |
| 0.2239 | 30.77 | 1200 | 0.3541 | 0.8613 | 0.8613 |
| 0.2 | 35.9 | 1400 | 0.3759 | 0.8659 | 0.8662 |
| 0.1678 | 41.03 | 1600 | 0.3881 | 0.8673 | 0.8679 |
| 0.1413 | 46.15 | 1800 | 0.3860 | 0.8825 | 0.8825 |
| 0.1263 | 51.28 | 2000 | 0.4183 | 0.8646 | 0.8646 |
| 0.1148 | 56.41 | 2200 | 0.4556 | 0.8744 | 0.8744 |
| 0.1012 | 61.54 | 2400 | 0.4719 | 0.8726 | 0.8728 |
| 0.0869 | 66.67 | 2600 | 0.4991 | 0.8711 | 0.8711 |
| 0.084 | 71.79 | 2800 | 0.4861 | 0.8662 | 0.8662 |
| 0.0738 | 76.92 | 3000 | 0.5301 | 0.8646 | 0.8646 |
| 0.0663 | 82.05 | 3200 | 0.5858 | 0.8613 | 0.8613 |
| 0.0583 | 87.18 | 3400 | 0.5728 | 0.8711 | 0.8711 |
| 0.0562 | 92.31 | 3600 | 0.6076 | 0.8581 | 0.8581 |
| 0.0528 | 97.44 | 3800 | 0.6321 | 0.8530 | 0.8532 |
| 0.0523 | 102.56 | 4000 | 0.5966 | 0.8711 | 0.8711 |
| 0.0479 | 107.69 | 4200 | 0.6452 | 0.8613 | 0.8613 |
| 0.045 | 112.82 | 4400 | 0.6428 | 0.8679 | 0.8679 |
| 0.0413 | 117.95 | 4600 | 0.6647 | 0.8646 | 0.8646 |
| 0.0365 | 123.08 | 4800 | 0.7053 | 0.8678 | 0.8679 |
| 0.0336 | 128.21 | 5000 | 0.6581 | 0.8613 | 0.8613 |
| 0.0379 | 133.33 | 5200 | 0.6563 | 0.8629 | 0.8630 |
| 0.0337 | 138.46 | 5400 | 0.7398 | 0.8727 | 0.8728 |
| 0.0305 | 143.59 | 5600 | 0.7038 | 0.8662 | 0.8662 |
| 0.0295 | 148.72 | 5800 | 0.7416 | 0.8597 | 0.8597 |
| 0.0283 | 153.85 | 6000 | 0.6883 | 0.8743 | 0.8744 |
| 0.0284 | 158.97 | 6200 | 0.7029 | 0.8597 | 0.8597 |
| 0.0265 | 164.1 | 6400 | 0.7307 | 0.8695 | 0.8695 |
| 0.0261 | 169.23 | 6600 | 0.7407 | 0.8662 | 0.8662 |
| 0.0244 | 174.36 | 6800 | 0.7216 | 0.8662 | 0.8662 |
| 0.0265 | 179.49 | 7000 | 0.7617 | 0.8678 | 0.8679 |
| 0.0256 | 184.62 | 7200 | 0.7399 | 0.8597 | 0.8597 |
| 0.0219 | 189.74 | 7400 | 0.7639 | 0.8678 | 0.8679 |
| 0.0222 | 194.87 | 7600 | 0.7843 | 0.8564 | 0.8564 |
| 0.02 | 200.0 | 7800 | 0.8089 | 0.8678 | 0.8679 |
| 0.0207 | 205.13 | 8000 | 0.8057 | 0.8677 | 0.8679 |
| 0.0202 | 210.26 | 8200 | 0.7836 | 0.8662 | 0.8662 |
| 0.0199 | 215.38 | 8400 | 0.8125 | 0.8662 | 0.8662 |
| 0.0201 | 220.51 | 8600 | 0.7718 | 0.8711 | 0.8711 |
| 0.0192 | 225.64 | 8800 | 0.7955 | 0.8646 | 0.8646 |
| 0.0189 | 230.77 | 9000 | 0.7855 | 0.8695 | 0.8695 |
| 0.0173 | 235.9 | 9200 | 0.8047 | 0.8695 | 0.8695 |
| 0.0188 | 241.03 | 9400 | 0.8125 | 0.8613 | 0.8613 |
| 0.0173 | 246.15 | 9600 | 0.8071 | 0.8744 | 0.8744 |
| 0.018 | 251.28 | 9800 | 0.8009 | 0.8630 | 0.8630 |
| 0.0183 | 256.41 | 10000 | 0.8004 | 0.8646 | 0.8646 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_16384_512_56M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_16384_512_56M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:37:29+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_core\_tata-seqsight\_16384\_512\_56M-L32\_f
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4512
* F1 Score: 0.8564
* Accuracy: 0.8564
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
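No official startup code is provided yet; a minimal sketch, assuming the checkpoint loads through the standard `transformers` causal-LM classes (the repo id comes from this card's metadata, and the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lunarsylph/stablecell_v54"  # repo id from this card's metadata

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The repo is tagged "conversational", so we assume a chat template is defined.
messages = [{"role": "user", "content": "Hello, what can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```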
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | lunarsylph/stablecell_v54 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T00:37:34+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0428HMA7
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
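For reference, a minimal sketch of how these settings map onto `transformers.TrainingArguments`, assuming the standard `Trainer` API was used (the output directory is a placeholder):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="O0428HMA7",          # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,  # 8 * 16 = total train batch size 128
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,                       # "Native AMP" mixed precision
)
```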
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9521 | 0.09 | 10 | 0.2719 |
| 0.2 | 0.18 | 20 | 0.1669 |
| 0.1531 | 0.27 | 30 | 0.1638 |
| 0.158 | 0.36 | 40 | 0.1538 |
| 0.1501 | 0.45 | 50 | 0.1480 |
| 0.1498 | 0.54 | 60 | 0.1474 |
| 0.1489 | 0.63 | 70 | 0.1469 |
| 0.1485 | 0.73 | 80 | 0.1517 |
| 0.146 | 0.82 | 90 | 0.1484 |
| 0.1477 | 0.91 | 100 | 0.1477 |
| 0.1504 | 1.0 | 110 | 0.1438 |
| 0.1562 | 1.09 | 120 | 0.2446 |
| 0.1499 | 1.18 | 130 | 0.1353 |
| 0.1159 | 1.27 | 140 | 0.0980 |
| 0.1368 | 1.36 | 150 | 0.0808 |
| 0.0737 | 1.45 | 160 | 0.0783 |
| 0.0718 | 1.54 | 170 | 0.0709 |
| 0.0725 | 1.63 | 180 | 0.0753 |
| 0.0733 | 1.72 | 190 | 0.0688 |
| 0.07 | 1.81 | 200 | 0.0640 |
| 0.069 | 1.9 | 210 | 0.0669 |
| 0.0648 | 1.99 | 220 | 0.0546 |
| 0.0578 | 2.08 | 230 | 0.0551 |
| 0.051 | 2.18 | 240 | 0.0541 |
| 0.0527 | 2.27 | 250 | 0.0547 |
| 0.059 | 2.36 | 260 | 0.0551 |
| 0.0518 | 2.45 | 270 | 0.0534 |
| 0.0503 | 2.54 | 280 | 0.0534 |
| 0.053 | 2.63 | 290 | 0.0548 |
| 0.0548 | 2.72 | 300 | 0.0529 |
| 0.0543 | 2.81 | 310 | 0.0527 |
| 0.0544 | 2.9 | 320 | 0.0529 |
| 0.058 | 2.99 | 330 | 0.0529 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA7", "results": []}]} | Litzy619/O0428HMA7 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T00:37:55+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
| O0428HMA7
=========
This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0529
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_all-seqsight_16384_512_56M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2091
- F1 Score: 0.9132
- Accuracy: 0.9132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
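The card does not include usage code; a minimal inference sketch, assuming the base checkpoint loads through `AutoModel` (it may additionally require `trust_remote_code=True`) and that this repository hosts the adapter weights:

```python
from transformers import AutoModel
from peft import PeftModel

# Base backbone named in this card, plus the adapter published in this repository
base = AutoModel.from_pretrained("mahdibaghbanzadeh/seqsight_16384_512_56M")
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_16384_512_56M-L1_f"
)
model.eval()  # adapter weights are applied on top of the frozen base
```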
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.3563 | 0.54 | 200 | 0.2762 | 0.8925 | 0.8926 |
| 0.2803 | 1.08 | 400 | 0.2517 | 0.9022 | 0.9022 |
| 0.2616 | 1.62 | 600 | 0.2373 | 0.9054 | 0.9054 |
| 0.2456 | 2.16 | 800 | 0.2302 | 0.9093 | 0.9093 |
| 0.2412 | 2.7 | 1000 | 0.2258 | 0.9100 | 0.9100 |
| 0.2325 | 3.24 | 1200 | 0.2266 | 0.9110 | 0.9110 |
| 0.2341 | 3.78 | 1400 | 0.2206 | 0.9118 | 0.9118 |
| 0.2298 | 4.32 | 1600 | 0.2176 | 0.9152 | 0.9152 |
| 0.2209 | 4.86 | 1800 | 0.2198 | 0.9154 | 0.9154 |
| 0.2185 | 5.41 | 2000 | 0.2152 | 0.9171 | 0.9171 |
| 0.2189 | 5.95 | 2200 | 0.2122 | 0.9174 | 0.9174 |
| 0.2168 | 6.49 | 2400 | 0.2082 | 0.9211 | 0.9211 |
| 0.2134 | 7.03 | 2600 | 0.2080 | 0.9190 | 0.9191 |
| 0.2111 | 7.57 | 2800 | 0.2069 | 0.9197 | 0.9198 |
| 0.2075 | 8.11 | 3000 | 0.2058 | 0.9219 | 0.9220 |
| 0.2055 | 8.65 | 3200 | 0.2022 | 0.9191 | 0.9191 |
| 0.2002 | 9.19 | 3400 | 0.2019 | 0.9206 | 0.9206 |
| 0.2015 | 9.73 | 3600 | 0.2016 | 0.9206 | 0.9206 |
| 0.2071 | 10.27 | 3800 | 0.1973 | 0.9230 | 0.9230 |
| 0.1983 | 10.81 | 4000 | 0.1962 | 0.9233 | 0.9233 |
| 0.1981 | 11.35 | 4200 | 0.1947 | 0.9245 | 0.9245 |
| 0.196 | 11.89 | 4400 | 0.1983 | 0.9235 | 0.9235 |
| 0.1982 | 12.43 | 4600 | 0.1925 | 0.9267 | 0.9267 |
| 0.1954 | 12.97 | 4800 | 0.1928 | 0.9250 | 0.925 |
| 0.1929 | 13.51 | 5000 | 0.1959 | 0.9233 | 0.9233 |
| 0.1954 | 14.05 | 5200 | 0.1932 | 0.9221 | 0.9221 |
| 0.1933 | 14.59 | 5400 | 0.1936 | 0.9250 | 0.925 |
| 0.1903 | 15.14 | 5600 | 0.1924 | 0.9243 | 0.9243 |
| 0.1889 | 15.68 | 5800 | 0.1944 | 0.9206 | 0.9206 |
| 0.1921 | 16.22 | 6000 | 0.1923 | 0.9257 | 0.9257 |
| 0.1873 | 16.76 | 6200 | 0.1923 | 0.9245 | 0.9245 |
| 0.188 | 17.3 | 6400 | 0.1922 | 0.9238 | 0.9238 |
| 0.1941 | 17.84 | 6600 | 0.1891 | 0.9253 | 0.9253 |
| 0.1843 | 18.38 | 6800 | 0.1904 | 0.9233 | 0.9233 |
| 0.1892 | 18.92 | 7000 | 0.1901 | 0.9238 | 0.9238 |
| 0.1882 | 19.46 | 7200 | 0.1906 | 0.9248 | 0.9248 |
| 0.1843 | 20.0 | 7400 | 0.1893 | 0.9275 | 0.9275 |
| 0.19 | 20.54 | 7600 | 0.1887 | 0.9248 | 0.9248 |
| 0.1869 | 21.08 | 7800 | 0.1890 | 0.9255 | 0.9255 |
| 0.1854 | 21.62 | 8000 | 0.1896 | 0.9220 | 0.9220 |
| 0.1843 | 22.16 | 8200 | 0.1895 | 0.9247 | 0.9247 |
| 0.1819 | 22.7 | 8400 | 0.1889 | 0.9265 | 0.9265 |
| 0.1864 | 23.24 | 8600 | 0.1886 | 0.9250 | 0.925 |
| 0.1842 | 23.78 | 8800 | 0.1892 | 0.9230 | 0.9230 |
| 0.1866 | 24.32 | 9000 | 0.1894 | 0.9235 | 0.9235 |
| 0.1795 | 24.86 | 9200 | 0.1899 | 0.9230 | 0.9230 |
| 0.1826 | 25.41 | 9400 | 0.1890 | 0.9253 | 0.9253 |
| 0.1843 | 25.95 | 9600 | 0.1889 | 0.9243 | 0.9243 |
| 0.1801 | 26.49 | 9800 | 0.1889 | 0.9257 | 0.9257 |
| 0.1849 | 27.03 | 10000 | 0.1888 | 0.9257 | 0.9257 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_16384_512_56M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_16384_512_56M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:38:28+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_300\_all-seqsight\_16384\_512\_56M-L1\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2091
* F1 Score: 0.9132
* Accuracy: 0.9132
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_all-seqsight_16384_512_56M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1930
- F1 Score: 0.9252
- Accuracy: 0.9252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.33 | 0.54 | 200 | 0.2476 | 0.9046 | 0.9046 |
| 0.2507 | 1.08 | 400 | 0.2311 | 0.9098 | 0.9098 |
| 0.2376 | 1.62 | 600 | 0.2194 | 0.9130 | 0.9130 |
| 0.2227 | 2.16 | 800 | 0.2200 | 0.9114 | 0.9115 |
| 0.2177 | 2.7 | 1000 | 0.2133 | 0.9121 | 0.9122 |
| 0.2092 | 3.24 | 1200 | 0.2129 | 0.9146 | 0.9147 |
| 0.2093 | 3.78 | 1400 | 0.1997 | 0.9208 | 0.9208 |
| 0.2032 | 4.32 | 1600 | 0.1924 | 0.9230 | 0.9230 |
| 0.1952 | 4.86 | 1800 | 0.1918 | 0.9215 | 0.9215 |
| 0.1903 | 5.41 | 2000 | 0.1913 | 0.9275 | 0.9275 |
| 0.1917 | 5.95 | 2200 | 0.1853 | 0.9243 | 0.9243 |
| 0.1878 | 6.49 | 2400 | 0.1857 | 0.9218 | 0.9218 |
| 0.1861 | 7.03 | 2600 | 0.1842 | 0.9246 | 0.9247 |
| 0.1807 | 7.57 | 2800 | 0.1824 | 0.9213 | 0.9213 |
| 0.1805 | 8.11 | 3000 | 0.1827 | 0.9242 | 0.9242 |
| 0.1768 | 8.65 | 3200 | 0.1837 | 0.9233 | 0.9233 |
| 0.1708 | 9.19 | 3400 | 0.1813 | 0.9253 | 0.9253 |
| 0.1737 | 9.73 | 3600 | 0.1744 | 0.9287 | 0.9287 |
| 0.1787 | 10.27 | 3800 | 0.1800 | 0.9243 | 0.9243 |
| 0.1706 | 10.81 | 4000 | 0.1772 | 0.9285 | 0.9285 |
| 0.1696 | 11.35 | 4200 | 0.1752 | 0.9297 | 0.9297 |
| 0.167 | 11.89 | 4400 | 0.1786 | 0.9277 | 0.9277 |
| 0.1698 | 12.43 | 4600 | 0.1724 | 0.9275 | 0.9275 |
| 0.1647 | 12.97 | 4800 | 0.1709 | 0.9289 | 0.9289 |
| 0.1642 | 13.51 | 5000 | 0.1772 | 0.9263 | 0.9264 |
| 0.166 | 14.05 | 5200 | 0.1813 | 0.9233 | 0.9233 |
| 0.1625 | 14.59 | 5400 | 0.1725 | 0.9302 | 0.9302 |
| 0.1619 | 15.14 | 5600 | 0.1729 | 0.9287 | 0.9287 |
| 0.1582 | 15.68 | 5800 | 0.1734 | 0.9290 | 0.9291 |
| 0.16 | 16.22 | 6000 | 0.1723 | 0.9284 | 0.9284 |
| 0.1567 | 16.76 | 6200 | 0.1768 | 0.9285 | 0.9285 |
| 0.1561 | 17.3 | 6400 | 0.1766 | 0.9258 | 0.9258 |
| 0.1611 | 17.84 | 6600 | 0.1708 | 0.9292 | 0.9292 |
| 0.1524 | 18.38 | 6800 | 0.1731 | 0.9275 | 0.9275 |
| 0.1561 | 18.92 | 7000 | 0.1726 | 0.9287 | 0.9287 |
| 0.1546 | 19.46 | 7200 | 0.1745 | 0.9304 | 0.9304 |
| 0.152 | 20.0 | 7400 | 0.1722 | 0.9311 | 0.9311 |
| 0.1566 | 20.54 | 7600 | 0.1760 | 0.9280 | 0.9280 |
| 0.1519 | 21.08 | 7800 | 0.1702 | 0.9307 | 0.9307 |
| 0.149 | 21.62 | 8000 | 0.1734 | 0.9297 | 0.9297 |
| 0.1501 | 22.16 | 8200 | 0.1734 | 0.9307 | 0.9307 |
| 0.1475 | 22.7 | 8400 | 0.1731 | 0.9292 | 0.9292 |
| 0.15 | 23.24 | 8600 | 0.1741 | 0.9294 | 0.9294 |
| 0.1491 | 23.78 | 8800 | 0.1729 | 0.9296 | 0.9296 |
| 0.1501 | 24.32 | 9000 | 0.1748 | 0.9277 | 0.9277 |
| 0.1447 | 24.86 | 9200 | 0.1748 | 0.9287 | 0.9287 |
| 0.1487 | 25.41 | 9400 | 0.1735 | 0.9289 | 0.9289 |
| 0.1483 | 25.95 | 9600 | 0.1741 | 0.9287 | 0.9287 |
| 0.1444 | 26.49 | 9800 | 0.1731 | 0.9294 | 0.9294 |
| 0.1486 | 27.03 | 10000 | 0.1734 | 0.9297 | 0.9297 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_16384_512_56M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_16384_512_56M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:39:26+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_300\_all-seqsight\_16384\_512\_56M-L8\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1930
* F1 Score: 0.9252
* Accuracy: 0.9252
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_all-seqsight_16384_512_56M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1966
- F1 Score: 0.9270
- Accuracy: 0.9270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.3053 | 0.54 | 200 | 0.2514 | 0.8983 | 0.8983 |
| 0.2346 | 1.08 | 400 | 0.2262 | 0.9123 | 0.9123 |
| 0.2215 | 1.62 | 600 | 0.2038 | 0.9181 | 0.9181 |
| 0.2065 | 2.16 | 800 | 0.1970 | 0.9233 | 0.9233 |
| 0.2019 | 2.7 | 1000 | 0.1946 | 0.9214 | 0.9215 |
| 0.1924 | 3.24 | 1200 | 0.1895 | 0.9204 | 0.9204 |
| 0.1907 | 3.78 | 1400 | 0.1814 | 0.9245 | 0.9245 |
| 0.1864 | 4.32 | 1600 | 0.1811 | 0.9262 | 0.9262 |
| 0.1793 | 4.86 | 1800 | 0.1760 | 0.9248 | 0.9248 |
| 0.1718 | 5.41 | 2000 | 0.1800 | 0.9255 | 0.9255 |
| 0.1754 | 5.95 | 2200 | 0.1728 | 0.9272 | 0.9272 |
| 0.1683 | 6.49 | 2400 | 0.1942 | 0.9157 | 0.9159 |
| 0.1679 | 7.03 | 2600 | 0.1749 | 0.9297 | 0.9297 |
| 0.1603 | 7.57 | 2800 | 0.1737 | 0.9279 | 0.9279 |
| 0.1593 | 8.11 | 3000 | 0.1750 | 0.9282 | 0.9282 |
| 0.1554 | 8.65 | 3200 | 0.1774 | 0.9270 | 0.9270 |
| 0.147 | 9.19 | 3400 | 0.1869 | 0.9240 | 0.9240 |
| 0.1498 | 9.73 | 3600 | 0.1674 | 0.9328 | 0.9328 |
| 0.1508 | 10.27 | 3800 | 0.1758 | 0.9319 | 0.9319 |
| 0.1444 | 10.81 | 4000 | 0.1699 | 0.9331 | 0.9331 |
| 0.1394 | 11.35 | 4200 | 0.1742 | 0.9328 | 0.9328 |
| 0.1383 | 11.89 | 4400 | 0.1704 | 0.9355 | 0.9355 |
| 0.1378 | 12.43 | 4600 | 0.1652 | 0.9333 | 0.9333 |
| 0.1315 | 12.97 | 4800 | 0.1675 | 0.9311 | 0.9311 |
| 0.13 | 13.51 | 5000 | 0.1795 | 0.9270 | 0.9270 |
| 0.1293 | 14.05 | 5200 | 0.1823 | 0.9256 | 0.9257 |
| 0.1263 | 14.59 | 5400 | 0.1675 | 0.9355 | 0.9355 |
| 0.1257 | 15.14 | 5600 | 0.1659 | 0.9343 | 0.9343 |
| 0.1215 | 15.68 | 5800 | 0.1731 | 0.9309 | 0.9309 |
| 0.1209 | 16.22 | 6000 | 0.1711 | 0.9328 | 0.9328 |
| 0.1176 | 16.76 | 6200 | 0.1816 | 0.9329 | 0.9329 |
| 0.1171 | 17.3 | 6400 | 0.1816 | 0.9290 | 0.9291 |
| 0.1182 | 17.84 | 6600 | 0.1686 | 0.9353 | 0.9353 |
| 0.11 | 18.38 | 6800 | 0.1765 | 0.9351 | 0.9351 |
| 0.115 | 18.92 | 7000 | 0.1719 | 0.9338 | 0.9338 |
| 0.1135 | 19.46 | 7200 | 0.1767 | 0.9339 | 0.9340 |
| 0.1086 | 20.0 | 7400 | 0.1744 | 0.9338 | 0.9338 |
| 0.1089 | 20.54 | 7600 | 0.1842 | 0.9341 | 0.9341 |
| 0.1091 | 21.08 | 7800 | 0.1731 | 0.9363 | 0.9363 |
| 0.1025 | 21.62 | 8000 | 0.1787 | 0.9348 | 0.9348 |
| 0.1054 | 22.16 | 8200 | 0.1768 | 0.9323 | 0.9323 |
| 0.1031 | 22.7 | 8400 | 0.1791 | 0.9309 | 0.9309 |
| 0.103 | 23.24 | 8600 | 0.1821 | 0.9336 | 0.9336 |
| 0.1019 | 23.78 | 8800 | 0.1793 | 0.9338 | 0.9338 |
| 0.1031 | 24.32 | 9000 | 0.1813 | 0.9321 | 0.9321 |
| 0.0967 | 24.86 | 9200 | 0.1810 | 0.9336 | 0.9336 |
| 0.0983 | 25.41 | 9400 | 0.1822 | 0.9333 | 0.9333 |
| 0.0993 | 25.95 | 9600 | 0.1833 | 0.9336 | 0.9336 |
| 0.0964 | 26.49 | 9800 | 0.1816 | 0.9348 | 0.9348 |
| 0.0984 | 27.03 | 10000 | 0.1821 | 0.9328 | 0.9328 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_16384_512_56M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_16384_512_56M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:39:53+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_300\_all-seqsight\_16384\_512\_56M-L32\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1966
* F1 Score: 0.9270
* Accuracy: 0.9270
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers | <a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a>
# Llama-3 8B Gradient Instruct 1048k
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, send us a message at [email protected].
For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab)
This model extends Llama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.

**Approach:**
- [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base
- NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization (see the sketch after this list)
- Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below)
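A minimal sketch of what the RoPE adjustment amounts to at the configuration level (this is not the training code itself; the theta and context values are taken from the progressive training table below):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
# Each stage raises RoPE theta before continuing training; the final 1048k
# stage used 2.80B (earlier stages: 15.3M, 207.1M, 1.06B).
cfg.rope_theta = 2.80e9
cfg.max_position_embeddings = 2**20  # 1048576-token context
```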
**Infra:**
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai)'s high-performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
**Data:**
For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
**Progressive Training Details:**
| | 65K | 262K | 524k | 1048k |
|------------------------|-----------|-----------|-----------|-----------|
| Initialize From | LLaMA-3 8B| 65K | 262K | 524k |
| Sequence Length 2^N | 16 | 18 | 19 | 20 |
| RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B |
| Batch Size | 1 | 1 | 16 | 16 |
| Gradient Accumulation Steps | 32 | 16 | 1 | 1 |
| Steps | 30 | 24 | 50 | 50 |
| Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 |
| Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 |
| # GPUs | 8 | 32 | 512 | 512 |
| GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S |
| Minutes to Train (Wall)| 202 | 555 | 61 | 87 |
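As a sanity check, each Total Tokens entry equals Steps × Gradient Accumulation Steps × Batch Size × Sequence Length; under Ring Attention a single sequence is sharded across the GPUs, so the GPU count does not multiply the token count:

```python
# (steps, grad_accum, batch_size, seq_len) per stage, from the table above
stages = [(30, 32, 1, 2**16), (24, 16, 1, 2**18), (50, 1, 16, 2**19), (50, 1, 16, 2**20)]
for steps, accum, batch, seqlen in stages:
    print(steps * accum * batch * seqlen)
# 62914560, 100663296, 419430400, 838860800 -- matching the Total Tokens row
```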
**Quants**:
- [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF)
- [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit)
## The Gradient AI Team
https://gradient.ai/
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
## Contact Us
Drop an email to [[email protected]](mailto:[email protected])
## References
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] https://github.com/jzhang38/EasyContext
----
# Base Model
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Context length | GQA | Token count | Knowledge cutoff |
|---|---|---|---|---|---|---|
| Llama 3 | A new mix of publicly available online data. | 8B | 8k | Yes | 15T+ | March, 2023 |
| Llama 3 | A new mix of publicly available online data. | 70B | 8k | Yes | 15T+ | December, 2023 |
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8B and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
#### Transformers pipeline
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
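Note that two stop ids are passed as `eos_token_id`: Llama 3 Instruct marks the end of each assistant turn with the special `<|eot_id|>` token rather than the tokenizer's default EOS, so omitting it would let generation run past the end of the turn.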
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### Use with `llama3`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3)
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
| | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|---|---|---|---|
| Llama 3 8B | 1.3M | 700 | 390 |
| Llama 3 70B | 6.4M | 700 | 1900 |
| Total | 7.7M | | 2290 |
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B model, respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
| Category | Benchmark | Llama 3 8B | Llama2 7B | Llama2 13B | Llama 3 70B | Llama2 70B |
|---|---|---|---|---|---|---|
| General | MMLU (5-shot) | 66.6 | 45.7 | 53.8 | 79.5 | 69.7 |
| General | AGIEval English (3-5 shot) | 45.9 | 28.8 | 38.7 | 63.0 | 54.8 |
| General | CommonSenseQA (7-shot) | 72.6 | 57.6 | 67.6 | 83.8 | 78.7 |
| General | Winogrande (5-shot) | 76.1 | 73.3 | 75.4 | 83.1 | 81.8 |
| General | BIG-Bench Hard (3-shot, CoT) | 61.1 | 38.1 | 47.0 | 81.3 | 65.7 |
| General | ARC-Challenge (25-shot) | 78.6 | 53.7 | 67.6 | 93.0 | 85.3 |
| Knowledge reasoning | TriviaQA-Wiki (5-shot) | 78.5 | 72.1 | 79.6 | 89.7 | 87.5 |
| Reading comprehension | SQuAD (1-shot) | 76.4 | 72.2 | 72.1 | 85.6 | 82.6 |
| Reading comprehension | QuAC (1-shot, F1) | 44.4 | 39.6 | 44.9 | 51.1 | 49.4 |
| Reading comprehension | BoolQ (0-shot) | 75.7 | 65.5 | 66.9 | 79.0 | 73.1 |
| Reading comprehension | DROP (3-shot, F1) | 58.4 | 37.9 | 49.8 | 79.7 | 70.2 |
### Instruction tuned models
| Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
|---|---|---|---|---|---|
| MMLU (5-shot) | 68.4 | 34.1 | 47.8 | 82.0 | 52.9 |
| GPQA (0-shot) | 34.2 | 21.7 | 22.3 | 39.5 | 21.0 |
| HumanEval (0-shot) | 62.2 | 7.9 | 14.0 | 81.7 | 25.6 |
| GSM-8K (8-shot, CoT) | 79.6 | 25.7 | 77.4 | 93.0 | 57.5 |
| MATH (4-shot, CoT) | 30.0 | 3.8 | 6.7 | 50.4 | 11.6 |
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
#### Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a twofold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
```
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos | {"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"} | blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw2.25-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-3",
"conversational",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:40:35+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| [<img src="URL" width="200"/>](URL)
Llama-3 8B Gradient Instruct 1048k
==================================
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us at contact@URL.
For more info see our End-to-end development service for custom LLMs and AI systems
This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from Crusoe Energy. It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.
!image/png
Approach:
* meta-llama/Meta-Llama-3-8B-Instruct as the base
* NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization (see the sketch after this list)
* Progressive training on increasing context lengths, similar to Large World Model [2] (See details below)
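To make the first bullet concrete, here is a minimal sketch of NTK-aware theta initialization. The scaling rule follows the standard NTK-aware formulation from [1]; the starting base of 500,000 and head dimension of 128 are our assumptions for Llama-3, not Gradient's exact schedule:

```python
import torch

def ntk_scaled_rope_base(base: float, scale: float, head_dim: int) -> float:
    # NTK-aware scaling rule [1]: base' = base * scale ** (d / (d - 2)),
    # where `scale` is the ratio of target to original context length.
    return base * scale ** (head_dim / (head_dim - 2))

def rope_inverse_frequencies(base: float, head_dim: int) -> torch.Tensor:
    # Per-dimension-pair inverse frequencies used by rotary embeddings.
    return 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))

# Assumed Llama-3 values: base 500_000, head_dim 128; extending 8k -> ~1048k.
theta_init = ntk_scaled_rope_base(500_000.0, 1_048_576 / 8_192, 128)
inv_freq = rope_inverse_frequencies(theta_init, 128)
```

This only provides the starting point; per the card, theta was afterwards tuned empirically.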
Infra:
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on Crusoe Energy's high-performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
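For intuition, the ring schedule can be simulated in a single process: each query block merges the KV blocks that "rotate" past it using a running log-sum-exp, so the result matches full attention exactly. This is an illustrative sketch, not Gradient's distributed implementation:

```python
import torch

def ring_attention(q_blocks, k_blocks, v_blocks):
    # Single-process simulation of blockwise RingAttention [3]: KV blocks
    # arrive one per ring step, and partial results are merged with a
    # running log-sum-exp so the output equals full softmax attention.
    n = len(q_blocks)
    outs = []
    for i in range(n):
        q = q_blocks[i]
        m = torch.full((q.shape[0], 1), float("-inf"))  # running row max
        l = torch.zeros(q.shape[0], 1)                  # running softmax denom
        o = torch.zeros_like(q)                         # running weighted sum
        for step in range(n):
            j = (i + step) % n  # KV block arriving at this ring step
            s = q @ k_blocks[j].T / q.shape[-1] ** 0.5
            m_new = torch.maximum(m, s.max(dim=-1, keepdim=True).values)
            scale = torch.exp(m - m_new)
            p = torch.exp(s - m_new)
            l = l * scale + p.sum(dim=-1, keepdim=True)
            o = o * scale + p @ v_blocks[j]
            m = m_new
        outs.append(o / l)
    return torch.cat(outs)

# Sanity check against full attention on toy tensors:
q = torch.randn(8, 16); k = torch.randn(8, 16); v = torch.randn(8, 16)
full = torch.softmax(q @ k.T / 16 ** 0.5, dim=-1) @ v
ring = ring_attention(q.chunk(4), k.chunk(4), v.chunk(4))
assert torch.allclose(full, ring, atol=1e-5)
```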
Data:
For training data, we generate long contexts by augmenting SlimPajama.
Progressive Training Details:
Quants:
* GGUF
* MLX-4bit
The Gradient AI Team
--------------------
URL
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
Contact Us
----------
Drop an email to contact@URL
References
----------
[1] Peng, Bowen, et al. "YaRN: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] URL
---
Base Model
==========
Model Details
-------------
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
Model developers Meta
Variations Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
Input Models input text only.
Output Models generate text and code only.
Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Llama 3 family of models. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date April 18, 2024.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License A custom commercial license is available at: URL
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here.
Intended Use
------------
Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.
Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
How to use
----------
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original 'llama3' codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.
#### Transformers pipeline
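The original snippet was stripped from this copy of the card; the following is a plausible reconstruction based on the standard Llama-3 pipeline recipe (the repo id is assumed, not confirmed by this card):

```python
import torch
import transformers

model_id = "gradientai/Llama-3-8B-Instruct-Gradient-1048k"  # assumed repo id

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Llama-3 uses <|eot_id|> as an additional end-of-turn terminator.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]
outputs = pipeline(
    prompt, max_new_tokens=256, eos_token_id=terminators,
    do_sample=True, temperature=0.6, top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```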
#### Transformers AutoModelForCausalLM
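Likewise, a hedged reconstruction of the AutoModelForCausalLM variant (same assumed repo id):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gradientai/Llama-3-8B-Instruct-Gradient-1048k"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]
outputs = model.generate(
    input_ids, max_new_tokens=256, eos_token_id=terminators,
    do_sample=True, temperature=0.6, top_p=0.9,
)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```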
### Use with 'llama3'
Please follow the instructions in the repository.
To download Original checkpoints, see the example command below leveraging 'huggingface-cli':
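The command itself is missing here; a typical invocation would be the following (the repo id is assumed, and whether this repo actually ships original-format checkpoints is not stated):

```shell
huggingface-cli download gradientai/Llama-3-8B-Instruct-Gradient-1048k \
  --include "original/*" --local-dir Llama-3-8B-Instruct-Gradient-1048k
```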
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
Hardware and Software
---------------------
Training Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
Carbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
CO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
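As a quick plausibility check on these figures (the implied grid carbon intensity below is our inference, not a stated number):

```python
gpu_hours = 7.7e6                     # cumulative H100-80GB hours (from the card)
tdp_kw = 0.700                        # 700 W TDP per GPU
energy_kwh = gpu_hours * tdp_kw       # ~5.39 million kWh at peak draw
emissions_kg = 2290 * 1000            # 2290 tCO2eq, reported and fully offset
print(round(emissions_kg / energy_kwh, 2))  # ~0.42 kgCO2eq/kWh implied intensity
```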
Training Data
-------------
Overview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models, respectively.
Benchmarks
----------
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.
### Base pretrained models
### Instruction tuned models
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
Safety
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
Refusals
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only impacts the user experience but can even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL
#### Critical risks
CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a twofold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### Cyber Security
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.
### Child Safety
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.
Finally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.
Ethical Considerations and Limitations
--------------------------------------
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at URL
Citation instructions

@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url={URL}
}
Contributors
------------
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
| [
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] |
feature-extraction | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
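Since this auto-generated card leaves the snippet blank, a generic sketch for a BERT feature-extraction checkpoint could look like the following (the repo id comes from this entry's metadata; mean pooling is our assumption, not documented behavior):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "andersonbcdefg/tiny-emb-2024-04-30_00-41-58"  # from this entry's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer(["An example sentence to embed."], return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # (batch, seq_len, hidden_dim)
embeddings = hidden.mean(dim=1)                  # mean pooling is an assumption
print(embeddings.shape)
```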
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | andersonbcdefg/tiny-emb-2024-04-30_00-41-58 | null | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T00:41:58+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
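As the section is empty, a minimal sketch of loading the model for text generation, assuming the standard Transformers chat-template workflow and the repo id from this card's metadata; treat it as illustrative rather than the authors' documented usage.

```python
# Hypothetical sketch -- assumes the repo ships a chat template.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
prompt = pipe.tokenizer.apply_chat_template(
    [{"role": "user", "content": "Quem é você?"}],
    tokenize=False,
    add_generation_prompt=True,
)
outputs = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.6)
print(outputs[0]["generated_text"][len(prompt):])
```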
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:41:59+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Llama-3-8B-Dolphin-Portuguese-v0.2
Model trained on a translated version of the Dolphin dataset.
## Usage
```python
import transformers
import torch
model_id = "adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.2"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "Você é um robô pirata que sempre responde como um pirata deveria!"},
{"role": "user", "content": "Quem é você?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
| {"language": ["pt"], "library_name": "transformers", "datasets": ["adalbertojunior/dolphin_pt_test"]} | adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"pt",
"dataset:adalbertojunior/dolphin_pt_test",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:42:45+00:00 | [] | [
"pt"
] | TAGS
#transformers #safetensors #llama #text-generation #conversational #pt #dataset-adalbertojunior/dolphin_pt_test #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Llama-3-8B-Dolphin-Portuguese-v0.2
Model trained on a translated version of the Dolphin dataset.
## Usage
| [
"# Model Card for Llama-3-8B-Dolphin-Portuguese-v0.2\n\nModel Trained on a translated version of dolphin dataset.",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #pt #dataset-adalbertojunior/dolphin_pt_test #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Llama-3-8B-Dolphin-Portuguese-v0.2\n\nModel Trained on a translated version of dolphin dataset.",
"## Usage"
] |
null | peft |
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: totally-not-an-llm/ZorgonChat
type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: false
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# qlora-out
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the [ZorgonChat](https://huggingface.co/datasets/totally-not-an-llm/ZorgonChat) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3466
## Model description
Trained on a dataset of "alien language" chats to see if it will learn to talk in English. The prompt format is:
```
You are a helpful assistant., respond in Language: English
### Instruction:
{prompt}
### Response:
```
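For inference, a hypothetical sketch of applying this QLoRA adapter on top of the base model with the format above; the card itself does not include inference code, so the generation settings are illustrative.

```python
# Hypothetical sketch -- adapter and base ids from this card; the prompt
# string is copied verbatim from the format above.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
adapter_id = "totally-not-an-llm/babelfish-1.1b-v0.1-qlora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

prompt = (
    "You are a helpful assistant., respond in Language: English\n\n"
    "### Instruction:\nHow do you say hello?\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```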
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9295 | 0.03 | 1 | 3.9073 |
| 3.5364 | 0.25 | 8 | 3.6199 |
| 3.263 | 0.5 | 16 | 3.1821 |
| 2.798 | 0.75 | 24 | 2.8962 |
| 2.7787 | 1.0 | 32 | 2.6773 |
| 2.5959 | 1.25 | 40 | 2.5506 |
| 2.4793 | 1.5 | 48 | 2.4955 |
| 2.5221 | 1.75 | 56 | 2.4613 |
| 2.4384 | 2.0 | 64 | 2.4055 |
| 2.295 | 2.25 | 72 | 2.3923 |
| 2.3943 | 2.5 | 80 | 2.3862 |
| 2.2398 | 2.75 | 88 | 2.3605 |
| 2.2693 | 3.0 | 96 | 2.3526 |
| 2.425 | 3.25 | 104 | 2.3471 |
| 2.2857 | 3.5 | 112 | 2.3468 |
| 2.2448 | 3.75 | 120 | 2.3451 |
| 2.1836 | 4.0 | 128 | 2.3466 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "datasets": ["totally-not-an-llm/ZorgonChat"], "base_model": "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "model-index": [{"name": "qlora-out", "results": []}]} | totally-not-an-llm/babelfish-1.1b-v0.1-qlora | null | [
"peft",
"tensorboard",
"safetensors",
"llama",
"generated_from_trainer",
"dataset:totally-not-an-llm/ZorgonChat",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"license:apache-2.0",
"4-bit",
"region:us"
] | null | 2024-04-30T00:42:53+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #llama #generated_from_trainer #dataset-totally-not-an-llm/ZorgonChat #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #license-apache-2.0 #4-bit #region-us
| <img src="URL alt="Built with Axolotl" width="200" height="32"/>
See axolotl config
axolotl version: '0.4.0'
qlora-out
=========
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T on the ZorgonChat dataset.
It achieves the following results on the evaluation set:
* Loss: 2.3466
Model description
-----------------
Trained on a dataset of "alien language" chats to see if it will learn to talk in English. The prompt format is:

### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 10
* num\_epochs: 4
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0.dev0
* Pytorch 2.1.2+cu118
* Datasets 2.15.0
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.1.2+cu118\n* Datasets 2.15.0\n* Tokenizers 0.15.0"
] | [
"TAGS\n#peft #tensorboard #safetensors #llama #generated_from_trainer #dataset-totally-not-an-llm/ZorgonChat #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #license-apache-2.0 #4-bit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.1.2+cu118\n* Datasets 2.15.0\n* Tokenizers 0.15.0"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K14ac-seqsight_16384_512_56M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4896
- F1 Score: 0.7661
- Accuracy: 0.7649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
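As a rough illustration, the list above maps onto 🤗 Transformers `TrainingArguments` as sketched below; this is a reconstruction, not the actual training script, which the card does not include.

```python
# A rough transformers equivalent of the hyperparameters above -- a sketch,
# not the authors' exact training setup.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_EMP_H3K14ac-seqsight_16384_512_56M-L1_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,      # training_steps: 10000
    adam_beta1=0.9,        # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```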
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5749 | 0.97 | 200 | 0.5474 | 0.7322 | 0.7304 |
| 0.5382 | 1.93 | 400 | 0.5217 | 0.7522 | 0.7507 |
| 0.5237 | 2.9 | 600 | 0.5640 | 0.7226 | 0.7225 |
| 0.5204 | 3.86 | 800 | 0.5064 | 0.7581 | 0.7567 |
| 0.5135 | 4.83 | 1000 | 0.5549 | 0.7360 | 0.7356 |
| 0.5112 | 5.8 | 1200 | 0.5327 | 0.7480 | 0.7467 |
| 0.5061 | 6.76 | 1400 | 0.5123 | 0.7550 | 0.7534 |
| 0.5028 | 7.73 | 1600 | 0.5116 | 0.7575 | 0.7558 |
| 0.4998 | 8.7 | 1800 | 0.5150 | 0.7573 | 0.7558 |
| 0.4946 | 9.66 | 2000 | 0.5178 | 0.7528 | 0.7513 |
| 0.5002 | 10.63 | 2200 | 0.5350 | 0.7513 | 0.7501 |
| 0.4906 | 11.59 | 2400 | 0.4972 | 0.7583 | 0.7567 |
| 0.4946 | 12.56 | 2600 | 0.5230 | 0.7587 | 0.7573 |
| 0.4907 | 13.53 | 2800 | 0.5104 | 0.7575 | 0.7561 |
| 0.4911 | 14.49 | 3000 | 0.5158 | 0.7569 | 0.7555 |
| 0.4831 | 15.46 | 3200 | 0.4969 | 0.7626 | 0.7610 |
| 0.4857 | 16.43 | 3400 | 0.5169 | 0.7574 | 0.7561 |
| 0.4859 | 17.39 | 3600 | 0.4996 | 0.7653 | 0.7637 |
| 0.4833 | 18.36 | 3800 | 0.4888 | 0.7641 | 0.7631 |
| 0.4802 | 19.32 | 4000 | 0.4941 | 0.7656 | 0.7640 |
| 0.4785 | 20.29 | 4200 | 0.4986 | 0.7650 | 0.7634 |
| 0.4812 | 21.26 | 4400 | 0.5068 | 0.7652 | 0.7637 |
| 0.4808 | 22.22 | 4600 | 0.5135 | 0.7595 | 0.7582 |
| 0.478 | 23.19 | 4800 | 0.4918 | 0.7674 | 0.7658 |
| 0.4783 | 24.15 | 5000 | 0.4966 | 0.7653 | 0.7637 |
| 0.476 | 25.12 | 5200 | 0.5129 | 0.7616 | 0.7604 |
| 0.4731 | 26.09 | 5400 | 0.4968 | 0.7650 | 0.7634 |
| 0.4764 | 27.05 | 5600 | 0.5075 | 0.7635 | 0.7622 |
| 0.4742 | 28.02 | 5800 | 0.4884 | 0.7677 | 0.7661 |
| 0.4741 | 28.99 | 6000 | 0.4955 | 0.7659 | 0.7643 |
| 0.4719 | 29.95 | 6200 | 0.5058 | 0.7634 | 0.7619 |
| 0.4721 | 30.92 | 6400 | 0.4844 | 0.7669 | 0.7658 |
| 0.4725 | 31.88 | 6600 | 0.4910 | 0.7674 | 0.7658 |
| 0.4704 | 32.85 | 6800 | 0.5001 | 0.7683 | 0.7667 |
| 0.4715 | 33.82 | 7000 | 0.5007 | 0.7656 | 0.7640 |
| 0.4692 | 34.78 | 7200 | 0.5045 | 0.7658 | 0.7643 |
| 0.4727 | 35.75 | 7400 | 0.4939 | 0.7710 | 0.7694 |
| 0.4661 | 36.71 | 7600 | 0.4955 | 0.7689 | 0.7673 |
| 0.468 | 37.68 | 7800 | 0.4953 | 0.7683 | 0.7667 |
| 0.4672 | 38.65 | 8000 | 0.4892 | 0.7713 | 0.7697 |
| 0.473 | 39.61 | 8200 | 0.4912 | 0.7704 | 0.7688 |
| 0.4612 | 40.58 | 8400 | 0.4977 | 0.7680 | 0.7664 |
| 0.467 | 41.55 | 8600 | 0.4906 | 0.7674 | 0.7658 |
| 0.464 | 42.51 | 8800 | 0.4949 | 0.7680 | 0.7664 |
| 0.4663 | 43.48 | 9000 | 0.5025 | 0.7674 | 0.7658 |
| 0.4666 | 44.44 | 9200 | 0.4970 | 0.7683 | 0.7667 |
| 0.4675 | 45.41 | 9400 | 0.4949 | 0.7677 | 0.7661 |
| 0.4602 | 46.38 | 9600 | 0.4976 | 0.7677 | 0.7661 |
| 0.4685 | 47.34 | 9800 | 0.4955 | 0.7683 | 0.7667 |
| 0.4635 | 48.31 | 10000 | 0.4938 | 0.7692 | 0.7676 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_16384_512_56M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_16384_512_56M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:42:55+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K14ac-seqsight\_16384\_512\_56M-L1\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K14ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4896
* F1 Score: 0.7661
* Accuracy: 0.7649
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K14ac-seqsight_16384_512_56M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4937
- F1 Score: 0.7680
- Accuracy: 0.7673
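The reported F1 score and accuracy can be computed from held-out predictions along these lines; this is a sketch with placeholder data, and the F1 averaging mode is an assumption, since the card does not state it.

```python
# Sketch only -- labels/predictions are placeholders, and "macro" averaging
# is an assumption about how the card's F1 score was computed.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 0, 1]   # placeholder labels
y_pred = [0, 1, 0, 0, 1]   # placeholder predictions

print("accuracy:", accuracy_score(y_true, y_pred))
print("f1:", f1_score(y_true, y_pred, average="macro"))
```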
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5645 | 0.97 | 200 | 0.5285 | 0.7485 | 0.7467 |
| 0.523 | 1.93 | 400 | 0.5110 | 0.7599 | 0.7582 |
| 0.5069 | 2.9 | 600 | 0.5345 | 0.7392 | 0.7383 |
| 0.5028 | 3.86 | 800 | 0.4977 | 0.7622 | 0.7607 |
| 0.4953 | 4.83 | 1000 | 0.5353 | 0.7493 | 0.7486 |
| 0.4921 | 5.8 | 1200 | 0.5099 | 0.7626 | 0.7613 |
| 0.4847 | 6.76 | 1400 | 0.4910 | 0.7671 | 0.7655 |
| 0.4809 | 7.73 | 1600 | 0.5009 | 0.7665 | 0.7652 |
| 0.4766 | 8.7 | 1800 | 0.4997 | 0.7635 | 0.7622 |
| 0.4696 | 9.66 | 2000 | 0.4951 | 0.7661 | 0.7646 |
| 0.4749 | 10.63 | 2200 | 0.4946 | 0.7764 | 0.7749 |
| 0.4637 | 11.59 | 2400 | 0.4844 | 0.7693 | 0.7679 |
| 0.4677 | 12.56 | 2600 | 0.4975 | 0.7691 | 0.7676 |
| 0.4625 | 13.53 | 2800 | 0.4971 | 0.7638 | 0.7622 |
| 0.4601 | 14.49 | 3000 | 0.4935 | 0.7677 | 0.7661 |
| 0.4488 | 15.46 | 3200 | 0.4850 | 0.7744 | 0.7731 |
| 0.4541 | 16.43 | 3400 | 0.4982 | 0.7639 | 0.7625 |
| 0.4513 | 17.39 | 3600 | 0.4855 | 0.7683 | 0.7667 |
| 0.4455 | 18.36 | 3800 | 0.4811 | 0.7753 | 0.7746 |
| 0.4434 | 19.32 | 4000 | 0.4826 | 0.7777 | 0.7764 |
| 0.4408 | 20.29 | 4200 | 0.4893 | 0.7717 | 0.7703 |
| 0.4417 | 21.26 | 4400 | 0.4943 | 0.7647 | 0.7631 |
| 0.4406 | 22.22 | 4600 | 0.5049 | 0.7643 | 0.7628 |
| 0.4358 | 23.19 | 4800 | 0.4893 | 0.7740 | 0.7725 |
| 0.4362 | 24.15 | 5000 | 0.5022 | 0.7626 | 0.7610 |
| 0.4312 | 25.12 | 5200 | 0.5127 | 0.7617 | 0.7604 |
| 0.4311 | 26.09 | 5400 | 0.4949 | 0.7662 | 0.7646 |
| 0.4298 | 27.05 | 5600 | 0.5092 | 0.7630 | 0.7616 |
| 0.4258 | 28.02 | 5800 | 0.4939 | 0.7695 | 0.7679 |
| 0.4255 | 28.99 | 6000 | 0.4894 | 0.7704 | 0.7688 |
| 0.4219 | 29.95 | 6200 | 0.4991 | 0.7632 | 0.7616 |
| 0.4209 | 30.92 | 6400 | 0.4876 | 0.7719 | 0.7716 |
| 0.4194 | 31.88 | 6600 | 0.4868 | 0.7740 | 0.7728 |
| 0.4169 | 32.85 | 6800 | 0.5065 | 0.7650 | 0.7634 |
| 0.4187 | 33.82 | 7000 | 0.5070 | 0.7661 | 0.7646 |
| 0.4146 | 34.78 | 7200 | 0.5037 | 0.7667 | 0.7652 |
| 0.4162 | 35.75 | 7400 | 0.4985 | 0.7722 | 0.7707 |
| 0.4086 | 36.71 | 7600 | 0.5023 | 0.7680 | 0.7664 |
| 0.4102 | 37.68 | 7800 | 0.4987 | 0.7689 | 0.7673 |
| 0.409 | 38.65 | 8000 | 0.4975 | 0.7662 | 0.7646 |
| 0.4145 | 39.61 | 8200 | 0.4959 | 0.7685 | 0.7670 |
| 0.4012 | 40.58 | 8400 | 0.5114 | 0.7641 | 0.7625 |
| 0.4083 | 41.55 | 8600 | 0.4976 | 0.7674 | 0.7658 |
| 0.4056 | 42.51 | 8800 | 0.5003 | 0.7692 | 0.7676 |
| 0.4047 | 43.48 | 9000 | 0.5074 | 0.7689 | 0.7673 |
| 0.4057 | 44.44 | 9200 | 0.5049 | 0.7680 | 0.7664 |
| 0.4077 | 45.41 | 9400 | 0.5039 | 0.7677 | 0.7661 |
| 0.3955 | 46.38 | 9600 | 0.5081 | 0.7674 | 0.7658 |
| 0.4058 | 47.34 | 9800 | 0.5033 | 0.7686 | 0.7670 |
| 0.4012 | 48.31 | 10000 | 0.5021 | 0.7686 | 0.7670 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_16384_512_56M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_16384_512_56M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:43:24+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K14ac-seqsight\_16384\_512\_56M-L8\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K14ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4937
* F1 Score: 0.7680
* Accuracy: 0.7673
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K14ac-seqsight_16384_512_56M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4872
- F1 Score: 0.7698
- Accuracy: 0.7685
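Loading this checkpoint presumably follows the usual PEFT pattern over the seqsight base model named below; in the following hypothetical sketch, the classification head, label count, and `trust_remote_code` flag are all assumptions, not documented in this card.

```python
# Hypothetical sketch -- head type, num_labels, and trust_remote_code are
# assumptions; the card does not document how to load this adapter.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_16384_512_56M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # binary histone-mark task assumed
)
model = PeftModel.from_pretrained(base, adapter_id)
```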
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5581 | 0.97 | 200 | 0.5117 | 0.7566 | 0.7549 |
| 0.5124 | 1.93 | 400 | 0.5007 | 0.7650 | 0.7634 |
| 0.4956 | 2.9 | 600 | 0.5127 | 0.7551 | 0.7537 |
| 0.4895 | 3.86 | 800 | 0.4880 | 0.7700 | 0.7685 |
| 0.482 | 4.83 | 1000 | 0.5241 | 0.7599 | 0.7589 |
| 0.4769 | 5.8 | 1200 | 0.4955 | 0.7712 | 0.7697 |
| 0.4677 | 6.76 | 1400 | 0.4860 | 0.7770 | 0.7755 |
| 0.4617 | 7.73 | 1600 | 0.4912 | 0.7661 | 0.7646 |
| 0.4529 | 8.7 | 1800 | 0.5037 | 0.7720 | 0.7707 |
| 0.445 | 9.66 | 2000 | 0.4949 | 0.7716 | 0.7700 |
| 0.4451 | 10.63 | 2200 | 0.4985 | 0.7734 | 0.7719 |
| 0.432 | 11.59 | 2400 | 0.4920 | 0.7746 | 0.7731 |
| 0.4303 | 12.56 | 2600 | 0.4882 | 0.7725 | 0.7713 |
| 0.4238 | 13.53 | 2800 | 0.4968 | 0.7683 | 0.7667 |
| 0.4162 | 14.49 | 3000 | 0.5082 | 0.7607 | 0.7592 |
| 0.4029 | 15.46 | 3200 | 0.5122 | 0.7737 | 0.7731 |
| 0.4029 | 16.43 | 3400 | 0.5077 | 0.7737 | 0.7722 |
| 0.3952 | 17.39 | 3600 | 0.4987 | 0.7718 | 0.7703 |
| 0.3852 | 18.36 | 3800 | 0.5061 | 0.7674 | 0.7661 |
| 0.3803 | 19.32 | 4000 | 0.5011 | 0.7749 | 0.7737 |
| 0.3764 | 20.29 | 4200 | 0.5178 | 0.7636 | 0.7631 |
| 0.3725 | 21.26 | 4400 | 0.5228 | 0.7700 | 0.7688 |
| 0.3656 | 22.22 | 4600 | 0.5321 | 0.7719 | 0.7703 |
| 0.3547 | 23.19 | 4800 | 0.5255 | 0.7625 | 0.7613 |
| 0.3563 | 24.15 | 5000 | 0.5375 | 0.7666 | 0.7652 |
| 0.3433 | 25.12 | 5200 | 0.5609 | 0.7674 | 0.7658 |
| 0.3397 | 26.09 | 5400 | 0.5365 | 0.7610 | 0.7595 |
| 0.3323 | 27.05 | 5600 | 0.5605 | 0.7689 | 0.7673 |
| 0.3263 | 28.02 | 5800 | 0.5687 | 0.7635 | 0.7619 |
| 0.3276 | 28.99 | 6000 | 0.5600 | 0.7679 | 0.7664 |
| 0.3173 | 29.95 | 6200 | 0.5517 | 0.7656 | 0.7643 |
| 0.3122 | 30.92 | 6400 | 0.5604 | 0.7628 | 0.7628 |
| 0.311 | 31.88 | 6600 | 0.5754 | 0.7604 | 0.7589 |
| 0.3055 | 32.85 | 6800 | 0.5759 | 0.7649 | 0.7634 |
| 0.3045 | 33.82 | 7000 | 0.5743 | 0.7680 | 0.7664 |
| 0.2972 | 34.78 | 7200 | 0.5662 | 0.7713 | 0.7697 |
| 0.2936 | 35.75 | 7400 | 0.5942 | 0.7692 | 0.7676 |
| 0.2899 | 36.71 | 7600 | 0.5824 | 0.7662 | 0.7649 |
| 0.2851 | 37.68 | 7800 | 0.6031 | 0.7613 | 0.7598 |
| 0.2816 | 38.65 | 8000 | 0.6097 | 0.7630 | 0.7616 |
| 0.2829 | 39.61 | 8200 | 0.5994 | 0.7632 | 0.7619 |
| 0.272 | 40.58 | 8400 | 0.6227 | 0.7605 | 0.7589 |
| 0.2729 | 41.55 | 8600 | 0.6197 | 0.7637 | 0.7622 |
| 0.271 | 42.51 | 8800 | 0.6207 | 0.7597 | 0.7585 |
| 0.2743 | 43.48 | 9000 | 0.6264 | 0.7638 | 0.7622 |
| 0.2639 | 44.44 | 9200 | 0.6307 | 0.7609 | 0.7595 |
| 0.269 | 45.41 | 9400 | 0.6214 | 0.7631 | 0.7616 |
| 0.2555 | 46.38 | 9600 | 0.6348 | 0.7636 | 0.7622 |
| 0.2658 | 47.34 | 9800 | 0.6264 | 0.7615 | 0.7601 |
| 0.2558 | 48.31 | 10000 | 0.6300 | 0.7618 | 0.7604 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_16384_512_56M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_16384_512_56M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:43:25+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K14ac-seqsight\_16384\_512\_56M-L32\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K14ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4872
* F1 Score: 0.7698
* Accuracy: 0.7685
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cilantro9246/vm71ht6 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:45:13+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | Based on Meta-Llama-3-8B-Instruct, and governed by the Meta Llama 3 license agreement:
https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
This v0.2 version uses an improved Dolphin-based dataset, but only 150K samples for testing instead of the full 850K. It doesn't seem to work that well, so I will need to add the rest of the dataset.
We are happy for anyone to try it out and give some feedback.
Training:
- 4096 sequence length, while the base model uses 8192. From testing, it still handles the full 8192-token context just fine.
- Trained on a modified and improved version of Eric Hartford's (Cognitive Computations) Dolphin dataset. https://huggingface.co/datasets/cognitivecomputations/dolphin
- Training duration was around 1 day on 2x RTX 3090s on our own machine, using 4-bit loading and QLoRA (rank 64, alpha 128), resulting in ~2% trainable weights.
The goal for this model is to be less censored and strong at general tasks, like the previous Dolphin-based models by Eric Hartford.
Instruct format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
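This is the stock Llama 3 chat template, so the prompt can be built with the tokenizer rather than by hand. A minimal sketch, assuming the repo's tokenizer ships with the standard chat template:
```python
# Minimal sketch: build the prompt with the tokenizer's chat template
# (assumes the repo's tokenizer ships the standard Llama 3 template).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AwanLLM/Llama-3-8B-Dolfin-v0.2-Instruct")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
# Produces the <|start_header_id|>/<|eot_id|> structure shown above.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```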
Quants:
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
Axolotl Config:
```
base_model: /home/owen/models/Meta-Llama-3-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
train_on_inputs: false
group_by_length: false
load_in_8bit: false
load_in_4bit: true
strict: false
sequence_len: 4096
bf16: true
fp16: false
tf32: false
flash_attention: true
# Data
datasets:
- path: /home/owen/datasets/cleaned-dolphin201-sharegpt2-uuid-improved.jsonl
type:
field_instruction: input
field_output: output
format: "<|start_header_id|>user<|end_header_id|>\n\n{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
no_input_format: "<|start_header_id|>user<|end_header_id|>\n\n{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
warmup_steps: 10
dataset_prepared_path: ./last_run_prepared
# Iterations
num_epochs: 1
saves_per_epoch: 4
# Evaluation
val_set_size: 0.01
eval_table_size:
eval_table_max_new_tokens:
eval_sample_packing: false
evals_per_epoch: 4
# LoRA
output_dir: ./qlora-out
adapter: qlora
lora_model_dir:
lora_r: 64
lora_alpha: 128
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
save_safetensors: true
# Sampling
sample_packing: true
pad_to_sequence_len: true
# Batching
gradient_accumulation_steps: 32
micro_batch_size: 2
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
# wandb
wandb_mode: # "offline" to save run metadata locally and not sync to the server, "disabled" to turn off wandb
wandb_project: llama-3-8b-instruct-dolphin-q
wandb_entity: # A wandb Team name if using a Team
wandb_watch:
wandb_name: 64-128-4096-1ep-v0.2
wandb_run_id: # Set the ID of your wandb run
wandb_log_model: # "checkpoint" to log model to wandb Artifacts every `save_steps` or "end" to log only at the end of training
# Optimizer
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002
# Misc
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
debug:
deepspeed: /home/owen/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.1
special_tokens:
pad_token: <|end_of_text|>
```
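For intuition, the 4-bit and LoRA portion of this config corresponds roughly to the following plain peft/bitsandbytes setup. This is an illustrative sketch, not Axolotl's actual internals:
```python
# Illustrative only: Axolotl wires this up internally from the YAML above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", quantization_config=bnb
)
lora = LoraConfig(r=64, lora_alpha=128, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # roughly the ~2% trainable weights noted above
```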
| {"license": "llama3"} | AwanLLM/Llama-3-8B-Dolfin-v0.2-Instruct | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:45:29+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| This model is based on Meta-Llama-3-8B-Instruct and is governed by the Meta Llama 3 License agreement:
URL
v0.2 version with an improved Dolphin-based dataset, but only 150K samples for testing instead of the full 850K. It doesn't seem to work that well yet, so I will need to add the rest of the dataset.
We are happy for anyone to try it out and give some feedback.
Training:
- Trained at a 4096 sequence length, while the base model uses an 8192 sequence length. In testing it still handles the full 8192 context just fine.
- Trained on a modified and improved version of Eric Hartford's (Cognitive Computations) Dolphin dataset. URL
- Training duration was around 1 day on 2x RTX 3090 on our own machine, using 4-bit loading and QLoRA (rank 64, alpha 128), resulting in ~2% trainable weights.
The goal for this model is to be less censored and strong at general tasks, like the previous Dolphin-based models by Eric Hartford.
Instruct format:
Quants:
<img src="URL alt="Built with Axolotl" width="200" height="32"/>
Axolotl Config:
| [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_16384_512_56M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5972
- F1 Score: 0.6840
- Accuracy: 0.6872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
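As a rough illustration (not the exact training script), these settings map onto transformers' `TrainingArguments` as follows:
```python
# Sketch only: the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_EMP_H3K4me2-seqsight_16384_512_56M-L1_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```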
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6453 | 1.04 | 200 | 0.6341 | 0.5788 | 0.6383 |
| 0.6199 | 2.08 | 400 | 0.6253 | 0.6551 | 0.6536 |
| 0.6124 | 3.12 | 600 | 0.6121 | 0.6474 | 0.6663 |
| 0.6068 | 4.17 | 800 | 0.6168 | 0.6541 | 0.6543 |
| 0.603 | 5.21 | 1000 | 0.6082 | 0.6700 | 0.6771 |
| 0.5996 | 6.25 | 1200 | 0.6052 | 0.6664 | 0.6758 |
| 0.5948 | 7.29 | 1400 | 0.6092 | 0.6734 | 0.6738 |
| 0.5953 | 8.33 | 1600 | 0.6112 | 0.6708 | 0.6719 |
| 0.5924 | 9.38 | 1800 | 0.6172 | 0.6623 | 0.6601 |
| 0.5904 | 10.42 | 2000 | 0.6073 | 0.6727 | 0.6745 |
| 0.5926 | 11.46 | 2200 | 0.6028 | 0.6673 | 0.6777 |
| 0.5848 | 12.5 | 2400 | 0.6064 | 0.6717 | 0.6781 |
| 0.5835 | 13.54 | 2600 | 0.6065 | 0.6705 | 0.6751 |
| 0.5849 | 14.58 | 2800 | 0.6048 | 0.6680 | 0.6787 |
| 0.5878 | 15.62 | 3000 | 0.6038 | 0.6757 | 0.6813 |
| 0.5813 | 16.67 | 3200 | 0.6048 | 0.6738 | 0.6771 |
| 0.5816 | 17.71 | 3400 | 0.6032 | 0.6741 | 0.6800 |
| 0.579 | 18.75 | 3600 | 0.6046 | 0.6740 | 0.6843 |
| 0.5801 | 19.79 | 3800 | 0.6059 | 0.6749 | 0.6787 |
| 0.5743 | 20.83 | 4000 | 0.6045 | 0.6727 | 0.6764 |
| 0.5794 | 21.88 | 4200 | 0.6088 | 0.6678 | 0.6686 |
| 0.5773 | 22.92 | 4400 | 0.6064 | 0.6705 | 0.6738 |
| 0.5752 | 23.96 | 4600 | 0.6097 | 0.6666 | 0.6660 |
| 0.5729 | 25.0 | 4800 | 0.6044 | 0.6750 | 0.6807 |
| 0.5728 | 26.04 | 5000 | 0.6071 | 0.6740 | 0.6768 |
| 0.5737 | 27.08 | 5200 | 0.6037 | 0.6719 | 0.6748 |
| 0.5703 | 28.12 | 5400 | 0.6019 | 0.6711 | 0.6748 |
| 0.5693 | 29.17 | 5600 | 0.6054 | 0.6732 | 0.6781 |
| 0.5676 | 30.21 | 5800 | 0.6089 | 0.6742 | 0.6781 |
| 0.5676 | 31.25 | 6000 | 0.6081 | 0.6739 | 0.6755 |
| 0.5676 | 32.29 | 6200 | 0.6024 | 0.6710 | 0.6768 |
| 0.5691 | 33.33 | 6400 | 0.6063 | 0.6726 | 0.6751 |
| 0.5675 | 34.38 | 6600 | 0.6028 | 0.6770 | 0.6820 |
| 0.5651 | 35.42 | 6800 | 0.6033 | 0.6781 | 0.6813 |
| 0.5651 | 36.46 | 7000 | 0.6041 | 0.6730 | 0.6777 |
| 0.5629 | 37.5 | 7200 | 0.6079 | 0.6762 | 0.6784 |
| 0.5621 | 38.54 | 7400 | 0.6077 | 0.6743 | 0.6787 |
| 0.5692 | 39.58 | 7600 | 0.6039 | 0.6761 | 0.6820 |
| 0.5646 | 40.62 | 7800 | 0.6098 | 0.6734 | 0.6732 |
| 0.5648 | 41.67 | 8000 | 0.6045 | 0.6757 | 0.6777 |
| 0.562 | 42.71 | 8200 | 0.6058 | 0.6736 | 0.6790 |
| 0.5631 | 43.75 | 8400 | 0.6061 | 0.6742 | 0.6771 |
| 0.5625 | 44.79 | 8600 | 0.6063 | 0.6716 | 0.6738 |
| 0.5593 | 45.83 | 8800 | 0.6061 | 0.6735 | 0.6781 |
| 0.5636 | 46.88 | 9000 | 0.6087 | 0.6768 | 0.6777 |
| 0.5599 | 47.92 | 9200 | 0.6076 | 0.6779 | 0.6800 |
| 0.561 | 48.96 | 9400 | 0.6063 | 0.6760 | 0.6790 |
| 0.5598 | 50.0 | 9600 | 0.6083 | 0.6773 | 0.6790 |
| 0.5606 | 51.04 | 9800 | 0.6076 | 0.6770 | 0.6790 |
| 0.5586 | 52.08 | 10000 | 0.6076 | 0.6773 | 0.6794 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_16384_512_56M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_16384_512_56M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T00:47:34+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K4me2-seqsight\_16384\_512\_56M-L1\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5972
* F1 Score: 0.6840
* Accuracy: 0.6872
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0428HMA14
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0127
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
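For reference, the warmup plus `cosine_with_restarts` pairing described above can be built directly with transformers' scheduler helper. A sketch with a placeholder model:
```python
# Sketch only: the optimizer/scheduler pairing described above, on a dummy model.
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # placeholder for the fine-tuned model
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=330  # 330 = final step in the table
)
```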
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5488 | 0.09 | 10 | 0.1855 |
| 0.1628 | 0.18 | 20 | 0.1581 |
| 0.1519 | 0.27 | 30 | 0.1629 |
| 0.1574 | 0.36 | 40 | 0.1531 |
| 0.1517 | 0.45 | 50 | 0.1502 |
| 0.1532 | 0.54 | 60 | 0.1478 |
| 0.149 | 0.63 | 70 | 0.1477 |
| 0.1492 | 0.73 | 80 | 0.1547 |
| 0.1464 | 0.82 | 90 | 0.1491 |
| 0.149 | 0.91 | 100 | 0.1511 |
| 0.1511 | 1.0 | 110 | 0.1485 |
| 0.147 | 1.09 | 120 | 0.1478 |
| 0.1475 | 1.18 | 130 | 0.1559 |
| 0.1484 | 1.27 | 140 | 0.1525 |
| 0.1515 | 1.36 | 150 | 0.1506 |
| 0.1459 | 1.45 | 160 | 0.1468 |
| 0.1455 | 1.54 | 170 | 0.1474 |
| 0.1475 | 1.63 | 180 | 0.1475 |
| 0.1468 | 1.72 | 190 | 0.1489 |
| 0.144 | 1.81 | 200 | 0.1465 |
| 0.1444 | 1.9 | 210 | 0.1328 |
| 0.105 | 1.99 | 220 | 0.0724 |
| 0.1351 | 2.08 | 230 | 0.0626 |
| 0.0575 | 2.18 | 240 | 0.0530 |
| 0.0396 | 2.27 | 250 | 0.0528 |
| 0.0463 | 2.36 | 260 | 0.0343 |
| 0.0316 | 2.45 | 270 | 0.0253 |
| 0.0186 | 2.54 | 280 | 0.0185 |
| 0.0338 | 2.63 | 290 | 0.0150 |
| 0.0168 | 2.72 | 300 | 0.0155 |
| 0.0149 | 2.81 | 310 | 0.0128 |
| 0.0198 | 2.9 | 320 | 0.0127 |
| 0.0184 | 2.99 | 330 | 0.0127 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA14", "results": []}]} | Litzy619/O0428HMA14 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T00:48:06+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
| O0428HMA14
==========
This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0127
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/xqqvt3r | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:48:06+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0428HMA15
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5625 | 0.09 | 10 | 0.2337 |
| 0.1706 | 0.18 | 20 | 0.1519 |
| 0.1507 | 0.27 | 30 | 0.1728 |
| 0.1576 | 0.36 | 40 | 0.1556 |
| 0.1494 | 0.45 | 50 | 0.1485 |
| 0.1503 | 0.54 | 60 | 0.1484 |
| 0.1504 | 0.63 | 70 | 0.1474 |
| 0.1504 | 0.73 | 80 | 0.1569 |
| 0.1468 | 0.82 | 90 | 0.1503 |
| 0.1497 | 0.91 | 100 | 0.1497 |
| 0.153 | 1.0 | 110 | 0.1512 |
| 0.1464 | 1.09 | 120 | 0.1479 |
| 0.1468 | 1.18 | 130 | 0.1521 |
| 0.1486 | 1.27 | 140 | 0.1491 |
| 0.1517 | 1.36 | 150 | 0.1495 |
| 0.1469 | 1.45 | 160 | 0.1515 |
| 0.1479 | 1.54 | 170 | 0.1503 |
| 0.1491 | 1.63 | 180 | 0.1471 |
| 0.148 | 1.72 | 190 | 0.1480 |
| 0.1456 | 1.81 | 200 | 0.1488 |
| 0.1504 | 1.9 | 210 | 0.1493 |
| 0.1468 | 1.99 | 220 | 0.1490 |
| 0.1471 | 2.08 | 230 | 0.1460 |
| 0.1413 | 2.18 | 240 | 0.1468 |
| 0.143 | 2.27 | 250 | 0.1468 |
| 0.1435 | 2.36 | 260 | 0.1476 |
| 0.1426 | 2.45 | 270 | 0.1461 |
| 0.1398 | 2.54 | 280 | 0.1447 |
| 0.1402 | 2.63 | 290 | 0.1446 |
| 0.1426 | 2.72 | 300 | 0.1424 |
| 0.1421 | 2.81 | 310 | 0.1428 |
| 0.1415 | 2.9 | 320 | 0.1428 |
| 0.1419 | 2.99 | 330 | 0.1427 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA15", "results": []}]} | Litzy619/O0428HMA15 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T00:48:50+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
| O0428HMA15
==========
This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1427
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
null | transformers |
# Uploaded model
- **Developed by:** robgonsalves
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | robgonsalves/llama-3-8b-Instruct-lora-64-64 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T00:49:28+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: robgonsalves
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: robgonsalves\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: robgonsalves\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | null |
# andreass123/Llama-3-Synatra-11B-v1-20k-Q4_K_M-GGUF
This model was converted to GGUF format from [`maywell/Llama-3-Synatra-11B-v1-20k`](https://huggingface.co/maywell/Llama-3-Synatra-11B-v1-20k) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/maywell/Llama-3-Synatra-11B-v1-20k) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo andreass123/Llama-3-Synatra-11B-v1-20k-Q4_K_M-GGUF --model llama-3-synatra-11b-v1-20k.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo andreass123/Llama-3-Synatra-11B-v1-20k-Q4_K_M-GGUF --model llama-3-synatra-11b-v1-20k.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-synatra-11b-v1-20k.Q4_K_M.gguf -n 128
```
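The same GGUF file can also be driven from Python via the `llama-cpp-python` bindings (not covered in the original card). A minimal sketch:
```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(model_path="llama-3-synatra-11b-v1-20k.Q4_K_M.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```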
| {"license": "other", "tags": ["llama-cpp", "gguf-my-repo"], "license_name": "llama3", "license_link": "LICENSE"} | andreass123/Llama-3-Synatra-11B-v1-20k-Q4_K_M-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:other",
"region:us"
] | null | 2024-04-30T00:49:35+00:00 | [] | [] | TAGS
#gguf #llama-cpp #gguf-my-repo #license-other #region-us
|
# andreass123/Llama-3-Synatra-11B-v1-20k-Q4_K_M-GGUF
This model was converted to GGUF format from 'maywell/Llama-3-Synatra-11B-v1-20k' using URL via URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# andreass123/Llama-3-Synatra-11B-v1-20k-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'maywell/Llama-3-Synatra-11B-v1-20k' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #license-other #region-us \n",
"# andreass123/Llama-3-Synatra-11B-v1-20k-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'maywell/Llama-3-Synatra-11B-v1-20k' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_dir
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0625
- Accuracy: 0.9828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
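A minimal usage sketch (not part of the auto-generated card; label names depend on the unknown fine-tuning dataset):
```python
# Illustrative only: label names depend on the (unknown) fine-tuning dataset.
from transformers import pipeline

classifier = pipeline("text-classification", model="tralon/test-v3-new")
print(classifier("An example sentence to score."))
```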
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-base", "model-index": [{"name": "output_dir", "results": []}]} | tralon/test-v3-new | null | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T00:52:13+00:00 | [] | [] | TAGS
#transformers #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v3-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# output_dir
This model is a fine-tuned version of microsoft/deberta-v3-base on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0625
- Accuracy: 0.9828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# output_dir\n\nThis model is a fine-tuned version of microsoft/deberta-v3-base on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0625\n- Accuracy: 0.9828",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v3-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# output_dir\n\nThis model is a fine-tuned version of microsoft/deberta-v3-base on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0625\n- Accuracy: 0.9828",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-hausa-sentimental
This model is a fine-tuned version of [Davlan/bert-base-multilingual-cased-finetuned-hausa](https://huggingface.co/Davlan/bert-base-multilingual-cased-finetuned-hausa) for sentiment analysis in the Hausa language.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
- Accuracy: 0.8372093023255814
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"language": ["ha"], "tags": ["generated_from_trainer"], "metrics": [{"accuracy": "0.8372093023255814"}], "base_model": "Davlan/bert-base-multilingual-cased-finetuned-hausa", "widget": [{"text": "Rubuta abun"}], "model-index": [{"name": "bert-hausa-sentimental", "results": []}]} | Oloruntobi/bert-hausa-sentimental | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"ha",
"base_model:Davlan/bert-base-multilingual-cased-finetuned-hausa",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T00:53:40+00:00 | [] | [
"ha"
] | TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #ha #base_model-Davlan/bert-base-multilingual-cased-finetuned-hausa #autotrain_compatible #endpoints_compatible #region-us
|
# bert-hausa-sentimental
This model is a fine-tuned version of Davlan/bert-base-multilingual-cased-finetuned-hausa for sentiment analysis in the Hausa language.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
- Accuracy: 0.8372093023255814
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# bert-hausa-sentimental\n\nThis model is a fine-tuned version of Davlan/bert-base-multilingual-cased-finetuned-hausa for sentimental analysis in the Hausa langauge.",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results\n\n- Accuracy: 0.8372093023255814",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #ha #base_model-Davlan/bert-base-multilingual-cased-finetuned-hausa #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-hausa-sentimental\n\nThis model is a fine-tuned version of Davlan/bert-base-multilingual-cased-finetuned-hausa for sentimental analysis in the Hausa langauge.",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results\n\n- Accuracy: 0.8372093023255814",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-generation | transformers |
Experimental Test Model of Llama-3-8B (base)
Finetuned using ChatML formatting. Only the first 50% of epoch 1 (out of two planned epochs) has been completed.

Extended to a 1 million token context using the PoSE technique.
| {"license": "apache-2.0"} | tavtav/Pyg-Llama-8B-1M-0.25 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:53:53+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Experimental Test Model of Llama-3-8B (base)
Finetuned using ChatML formatting. Only the first 50% of epoch 1 (out of two planned epochs) has been completed.

Extended to a 1 million token context using the PoSE technique.
| [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | # Smart-Lemon-Cookie-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model) as a base.
### Models Merged
The following models were included in the merge:
* [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)
* [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: SanjiWatsuki/Silicon-Maid-7B
parameters:
density: 1.0
weight: 1.0
- model: SanjiWatsuki/Kunoichi-7B
parameters:
density: 0.4
weight: 1.0
- model: KatyTheCutie/LemonadeRP-4.5.3
parameters:
density: 0.6
weight: 1.0
merge_method: ties
base_model: MTSAIR/multi_verse_model
parameters:
normalize: true
dtype: float16
```
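For intuition about the `density` values above, here is a toy, flattened version of the TIES trim-and-sign-election procedure. This is purely illustrative and not mergekit's implementation:
```python
# Toy TIES on flat "task vectors" (fine-tuned weights minus base weights).
# Purely illustrative; mergekit's real implementation differs in detail.
import torch

def ties_merge(task_vectors, densities):
    trimmed = []
    for tv, d in zip(task_vectors, densities):
        k = max(1, int(d * tv.numel()))                      # keep top-density fraction
        thresh = tv.abs().flatten().kthvalue(tv.numel() - k + 1).values
        trimmed.append(torch.where(tv.abs() >= thresh, tv, torch.zeros_like(tv)))
    stacked = torch.stack(trimmed)
    elected = torch.sign(stacked.sum(dim=0))                 # majority ("elected") sign
    mask = torch.sign(stacked) == elected                    # drop sign-conflicting entries
    kept = stacked * mask
    return kept.sum(dim=0) / mask.sum(dim=0).clamp(min=1)    # disjoint mean

deltas = [torch.randn(16) for _ in range(3)]                 # one delta per merged model
merged_delta = ties_merge(deltas, densities=[1.0, 0.4, 0.6])
```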
| {"library_name": "transformers", "tags": ["mergekit", "merge", "mistral", "text-generation"], "base_model": ["SanjiWatsuki/Silicon-Maid-7B", "MTSAIR/multi_verse_model", "SanjiWatsuki/Kunoichi-7B", "KatyTheCutie/LemonadeRP-4.5.3"]} | FallenMerick/Smart-Lemon-Cookie-7B | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:SanjiWatsuki/Silicon-Maid-7B",
"base_model:MTSAIR/multi_verse_model",
"base_model:SanjiWatsuki/Kunoichi-7B",
"base_model:KatyTheCutie/LemonadeRP-4.5.3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:54:39+00:00 | [
"2306.01708"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2306.01708 #base_model-SanjiWatsuki/Silicon-Maid-7B #base_model-MTSAIR/multi_verse_model #base_model-SanjiWatsuki/Kunoichi-7B #base_model-KatyTheCutie/LemonadeRP-4.5.3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Smart-Lemon-Cookie-7B
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the TIES merge method using MTSAIR/multi_verse_model as a base.
### Models Merged
The following models were included in the merge:
* SanjiWatsuki/Silicon-Maid-7B
* SanjiWatsuki/Kunoichi-7B
* KatyTheCutie/LemonadeRP-4.5.3
### Configuration
The following YAML configuration was used to produce this model:
| [
"# Smart-Lemon-Cookie-7B\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using MTSAIR/multi_verse_model as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* SanjiWatsuki/Silicon-Maid-7B\n* SanjiWatsuki/Kunoichi-7B\n* KatyTheCutie/LemonadeRP-4.5.3",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2306.01708 #base_model-SanjiWatsuki/Silicon-Maid-7B #base_model-MTSAIR/multi_verse_model #base_model-SanjiWatsuki/Kunoichi-7B #base_model-KatyTheCutie/LemonadeRP-4.5.3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Smart-Lemon-Cookie-7B\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using MTSAIR/multi_verse_model as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* SanjiWatsuki/Silicon-Maid-7B\n* SanjiWatsuki/Kunoichi-7B\n* KatyTheCutie/LemonadeRP-4.5.3",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers | <a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a>
# Llama-3 8B Gradient Instruct 1048k
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us at [email protected].
For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab)
This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.

**Approach:**
- [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base
- NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization
- Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below)
**Infra:**
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
**Data:**
For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
**Progressive Training Details:**
| | 65K | 262K | 524k | 1048k |
|------------------------|-----------|-----------|-----------|-----------|
| Initialize From | LLaMA-3 8B| 65K | 262K | 524k |
| Sequence Length 2^N | 16 | 18 | 19 | 20 |
| RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B |
| Batch Size | 1 | 1 | 16 | 16 |
| Gradient Accumulation Steps | 32 | 16 | 1 | 1 |
| Steps | 30 | 24 | 50 | 50 |
| Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 |
| Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 |
| # GPUs | 8 | 32 | 512 | 512 |
| GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S |
| Minutes to Train (Wall)| 202 | 555 | 61 | 87 |
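As a concrete illustration of what the RoPE theta row means in code (a sketch only; the released checkpoint already ships these values in its config):
```python
# Sketch only: the context extension ultimately amounts to raising
# rope_theta and max_position_embeddings in the Llama config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
config.rope_theta = 2.80e9              # 1048k-stage value from the table above
config.max_position_embeddings = 1048576
print(config.rope_theta, config.max_position_embeddings)
```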
**Quants**:
- [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF)
- [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit)
## The Gradient AI Team
https://gradient.ai/
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
## Contact Us
Drop an email to [[email protected]](mailto:[email protected])
## References
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] https://github.com/jzhang38/EasyContext
----
# Base Model
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
#### Transformers pipeline
```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

# Load the model in bfloat16 and place it automatically on the available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the conversation into Llama 3's chat format and append the header
# that cues the assistant's turn.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Llama 3 Instruct ends each turn with <|eot_id|>, so stop on it in addition
# to the regular EOS token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# The pipeline output includes the prompt; print only the newly generated text.
print(outputs[0]["generated_text"][len(prompt):])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load the weights in bfloat16 and shard them automatically across devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Tokenize the chat template directly and move the ids to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop on either the EOS token or Llama 3's end-of-turn token <|eot_id|>.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# generate() returns prompt + completion; slice off the prompt tokens.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### Use with `llama3`
Please follow the instructions in the [repository](https://github.com/meta-llama/llama3).
To download the original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
<td><strong>Carbon Emitted (tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
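The table is easy to reproduce from the stated assumptions. A minimal back-of-envelope sketch is below; the implied grid-carbon intensity is derived from the table's own numbers, not a figure Meta reports.

```python
# Back-of-envelope check of the CO2 table: energy = GPU hours x TDP,
# emissions factor = stated tCO2eq / energy. The intensity is derived here,
# not an official figure.
tdp_kw = 0.700  # 700 W per H100-80GB, from the text above
models = {
    "Llama 3 8B":  (1.3e6, 390),    # (GPU hours, tCO2eq)
    "Llama 3 70B": (6.4e6, 1900),
}

for name, (gpu_hours, tco2) in models.items():
    energy_mwh = gpu_hours * tdp_kw / 1000   # kWh -> MWh
    intensity = tco2 / energy_mwh            # implied tCO2eq per MWh
    print(f"{name}: {energy_mwh:,.0f} MWh, ~{intensity:.2f} tCO2eq/MWh implied")
```

Both rows imply roughly 0.42–0.43 tCO2eq per MWh, i.e. the table was computed with a single consistent emissions factor after the power-usage-efficiency adjustment.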
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models, respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only impacts the user experience but can even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
<span style="text-decoration:underline;">Misuse</span>
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a twofold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security</span>
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
```
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos | {"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"} | blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw2.5-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-3",
"conversational",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:54:40+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| [<img src="URL width="200"/>](URL)
Llama-3 8B Gradient Instruct 1048k
==================================
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us a message contact@URL.
For more info see our End-to-end development service for custom LLMs and AI systems
This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from Crusoe Energy. It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.
!image/png
Approach:
* meta-llama/Meta-Llama-3-8B-Instruct as the base
* NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization
* Progressive training on increasing context lengths, similar to Large World Model [2] (See details below)
Infra:
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on Crusoe Energy high performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
Data:
For training data, we generate long contexts by augmenting SlimPajama.
Progressive Training Details:
Quants:
* GGUF
* MLX-4bit
The Gradient AI Team
--------------------
URL
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
Contact Us
----------
Drop an email to contact@URL
References
----------
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] URL
---
Base Model
==========
Model Details
-------------
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
Model developers Meta
Variations Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
Input Models input text only.
Output Models generate text and code only.
Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Llama 3 family of models. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date April 18, 2024.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License A custom commercial license is available at: URL
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here.
Intended Use
------------
Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.
Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
How to use
----------
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original 'llama3' codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.
#### Transformers pipeline
#### Transformers AutoModelForCausalLM
### Use with 'llama3'
Please, follow the instructions in the repository
To download Original checkpoints, see the example command below leveraging 'huggingface-cli':
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
Hardware and Software
---------------------
Training Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
Carbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
CO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
Training Data
-------------
Overview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models, respectively.
Benchmarks
----------
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.
### Base pretrained models
### Instruction tuned models
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
Safety
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
Refusals
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL
#### Critical risks
CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### Cyber Security
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.
### Child Safety
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.
Finally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.
Ethical Considerations and Limitations
--------------------------------------
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at URL
instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {URL
}
Contributors
------------
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
| [
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] |
text-generation | transformers |
## **saiga_llama3_8b**
[exllamav2](https://github.com/turboderp/exllamav2) quant for [IlyaGusev/saiga_llama3_8b](https://huggingface.co/IlyaGusev/saiga_llama3_8b)
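Loading an exl2 quant is not covered by plain `transformers`; a minimal sketch using the exllamav2 Python API could look like the following (the local weights path and sampling settings are assumptions, and the exact API surface may vary across exllamav2 versions):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Assumed local directory containing this repo's quantized weights
config = ExLlamaV2Config()
config.model_dir = "saiga_llama3_8b-V4-8.0bpw-h8-exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate the KV cache lazily during autosplit
model.load_autosplit(cache)                # spread layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7                 # assumption, not a recommended value

print(generator.generate_simple("Привет!", settings, 128))
```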
**Original model information:**
# Saiga/Llama3 8B, Russian Llama-3-based chatbot
Based on [Llama-3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
Llama.cpp version: [link](https://huggingface.co/IlyaGusev/saiga_llama3_8b_gguf)
**ОСТОРОЖНО! WARNING! LET OP!**
I've changed the prompt format from ChatML to **the original Llama-3 format in v4**. Don't forget to switch formats!
**v4**: Llama-3 prompt format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|eot_id|><|start_header_id|>user<|end_header_id|>
Как дела?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Отлично, а у тебя?<|eot_id|><|start_header_id|>user<|end_header_id|>
Шикарно. Как пройти в библиотеку?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
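A minimal way to build this v4 prompt programmatically, assuming the tokenizer shipped in this repo carries the Llama-3 chat template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("IlyaGusev/saiga_llama3_8b")

messages = [
    {"role": "system", "content": "Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."},
    {"role": "user", "content": "Как дела?"},
]

# add_generation_prompt=True appends the assistant header so the model replies next
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```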
**v2, v3**: ChatML prompt format:
```
<|im_start|>system
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|im_end|>
<|im_start|>user
Как дела?<|im_end|>
<|im_start|>assistant
Отлично, а у тебя?<|im_end|>
<|im_start|>user
Шикарно. Как пройти в библиотеку?<|im_end|>
<|im_start|>assistant
```
Examples:
```
User: Почему трава зеленая?
Saiga: Зеленый цвет травы обусловлен наличием хлорофилла, пигмента, который поглощает свет и передает энергию для фотосинтеза. Хлорофилл содержится в клетках листьев растений и играет ключевую роль в процессе фотосинтеза, когда растения используют солнечный свет для превращения воды и углекислого газа в органические вещества, такие как глюкоза и олигоэфиры.
Хлорофилл состоит из двух основных форм: хлорофилла а и хлорофилла б. Хлорофилл а поглощает свет с длиной волны около 430 нанометров (нм), что соответствует синему или голубому свету, а хлорофилл б поглощает свет с длиной волны около 660 нм, что соответствует красному свету. Когда свет поглощается хлорофиллом, он передается в молекулы воды и углекислого газа, которые используются для фотосинтеза.
Таким образом, хлорофилл является основным компонентом, ответственным за зеленый цвет листьев растений, так как он поглощает свет и передает его энергию для фотосинтеза.
```
```
User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч
Saiga: Однажды в солнечную погоду на поле для игры в футбол приехала группа друзей, чтобы провести время вместе. В числе их был и Таня, которая была страстной футбольной фанаткой и всегда радовал своих друзей своим энтузиазмом и умением забивать мячи.
В этот день, как обычно, друзья решили провести соревнование между собой, чтобы определить кто из них является лучшим футболистом. Таня была очень уверена в своих силах и готовилась к этому матчу с особой серьёзностью.
Когда все друзья собрались на поле, они увидели, что перед ними стоял огромный мяч, который должен был стать предметом состязания. Мяч был огромным и тяжелым, и его размеры были необычайно большими по сравнению с обычными мячами, которые используются в футболе.
Таня была первая, кто решил начать игру. Она подошла к мячу и начала его удерживать, стараясь выдержать его вес и силу. Но мяч оказался настолько тяжелым, что Таня не смогла удержать его и он упал на землю.
Друзья посмеялись над ее неудачей, но Таня не отчаивалась и продолжила пытаться удержать мяч. Она стала использовать все свои силы и умения, чтобы выдержать его вес и силу. Наконец, после долгих усилий, она смогла удержать мяч и начала его бросать в сторону.
Мяч летел высоко вверх, и друзья смотрели, как он пролетает над полем. Но мяч неожиданно повернул и стал лететь обратно к Тане. Она успела поймать его и продолжила играть, используя все свои навыки и умения.
```
v4:
- [1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd)
- dataset: [saiga_scored](https://huggingface.co/datasets/IlyaGusev/saiga_scored), scores >= 8, c66032920556c0f21bbbed05e7e04433ec954c3d
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/dcbs9ttt)
v3:
- [c588356cd60bdee54d52c2dd5a2445acca8aa5c3](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/c588356cd60bdee54d52c2dd5a2445acca8aa5c3)
- dataset: [saiga_scored](https://huggingface.co/datasets/IlyaGusev/saiga_scored), scores >= 8, d51cf8060bdc90023da8cf1c3f113f9193d6569b
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/ltoqdsal)
v2:
- [ae61b4f9b34fac9856d361ea78c66284a00e4f0b](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/ae61b4f9b34fac9856d361ea78c66284a00e4f0b)
- dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
- wandb [link](https://wandb.ai/ilyagusev/huggingface/runs/r6u5juyk)
- 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4
- Datasets merging script: [create_short_chat_set.py](https://github.com/IlyaGusev/rulm/blob/d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a/self_instruct/src/data_processing/create_short_chat_set.py)
# Evaluation
* Dataset: https://github.com/IlyaGusev/rulm/blob/master/self_instruct/data/tasks.jsonl
* Framework: https://github.com/tatsu-lab/alpaca_eval
* Evaluator: alpaca_eval_cot_gpt4_turbo_fn
| model | length_controlled_winrate | win_rate | standard_error | avg_length |
|-----|-----|-----|-----|-----|
|chatgpt_4_turbo | 76.04 | 90.00 | 1.46 | 1270 |
|chatgpt_3_5_turbo | 50.00 | 50.00 | 0.00 | 536 |
|saiga_llama3_8b, v4 | 43.64 | 65.90 | 2.31 | 1200 |
|saiga_llama3_8b, v3 | 36.97 | 61.08 | 2.38 | 1162 |
|saiga_llama3_8b, v2 | 33.07 | 48.19 | 2.45 | 1166 |
|saiga_mistral_7b | 23.38 | 35.99 | 2.34 | 949 | | {"language": ["ru"], "license": "other", "tags": ["llama3", "8-bit"], "base_model": ["IlyaGusev/saiga_llama3_8b"], "license_name": "llama3", "license_link": "https://llama.meta.com/llama3/license/"} | Slvcxc/saiga_llama3_8b-V4-8.0bpw-h8-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama3",
"8-bit",
"conversational",
"ru",
"base_model:IlyaGusev/saiga_llama3_8b",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T00:56:09+00:00 | [] | [
"ru"
] | TAGS
#transformers #safetensors #llama #text-generation #llama3 #8-bit #conversational #ru #base_model-IlyaGusev/saiga_llama3_8b #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| saiga\_llama3\_8b
-----------------
exllamav2 quant for IlyaGusev/saiga\_llama3\_8b
Original model information:
Saiga/Llama3 8B, Russian Llama-3-based chatbot
==============================================
Based on Llama-3 8B Instruct.
URL version: link
ОСТОРОЖНО! WARNING! LET OP!
I've changed the prompt format from ChatML to the original Llama-3 format in v4. Don't forget to switch formats!
v4: Llama-3 prompt format:
v2, v3: ChatML prompt format:
Examples:
v4:
* 1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd
* dataset: saiga\_scored, scores >= 8, c66032920556c0f21bbbed05e7e04433ec954c3d
* wandb link
v3:
* c588356cd60bdee54d52c2dd5a2445acca8aa5c3
* dataset: saiga\_scored, scores >= 8, d51cf8060bdc90023da8cf1c3f113f9193d6569b
* wandb link
v2:
* ae61b4f9b34fac9856d361ea78c66284a00e4f0b
* dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
* wandb link
* 5 datasets: ru\_turbo\_saiga, ru\_sharegpt\_cleaned, oasst1\_ru\_main\_branch, gpt\_roleplay\_realm, ru\_instruct\_gpt4
* Datasets merging script: create\_short\_chat\_set.py
Evaluation
==========
* Dataset: URL
* Framework: URL
* Evaluator: alpaca\_eval\_cot\_gpt4\_turbo\_fn
| [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #llama3 #8-bit #conversational #ru #base_model-IlyaGusev/saiga_llama3_8b #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
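As a placeholder until the authors fill this in, a minimal sketch for loading this adapter on top of the base model named in the card metadata (the prompt and generation settings below are assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"  # base model from the card metadata
adapter_id = "Redliz/csci-566"                  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the PEFT adapter

inputs = tokenizer("[INST] Hello! [/INST]", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```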
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"} | Redliz/csci-566 | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-04-30T00:58:14+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
text-generation | null |
# andreass123/Llama-3-Ko-8B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`maywell/Llama-3-Ko-8B-Instruct`](https://huggingface.co/maywell/Llama-3-Ko-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/maywell/Llama-3-Ko-8B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo andreass123/Llama-3-Ko-8B-Instruct-Q4_K_M-GGUF --model llama-3-ko-8b-instruct.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo andreass123/Llama-3-Ko-8B-Instruct-Q4_K_M-GGUF --model llama-3-ko-8b-instruct.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-ko-8b-instruct.Q4_K_M.gguf -n 128
```
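Beyond the CLI, the same file can be used from Python via the llama-cpp-python bindings; a minimal sketch (package installation and sampling parameters are assumptions):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Pulls the GGUF file for this repo from the Hugging Face Hub
llm = Llama.from_pretrained(
    repo_id="andreass123/Llama-3-Ko-8B-Instruct-Q4_K_M-GGUF",
    filename="llama-3-ko-8b-instruct.Q4_K_M.gguf",
    n_ctx=2048,
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```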
| {"language": ["en", "ko"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "llama-3-ko", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE"} | andreass123/Llama-3-Ko-8B-Instruct-Q4_K_M-GGUF | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-3-ko",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"ko",
"license:other",
"region:us"
] | null | 2024-04-30T00:59:10+00:00 | [] | [
"en",
"ko"
] | TAGS
#gguf #facebook #meta #pytorch #llama #llama-3 #llama-3-ko #llama-cpp #gguf-my-repo #text-generation #en #ko #license-other #region-us
|
# andreass123/Llama-3-Ko-8B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from 'maywell/Llama-3-Ko-8B-Instruct' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# andreass123/Llama-3-Ko-8B-Instruct-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'maywell/Llama-3-Ko-8B-Instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #facebook #meta #pytorch #llama #llama-3 #llama-3-ko #llama-cpp #gguf-my-repo #text-generation #en #ko #license-other #region-us \n",
"# andreass123/Llama-3-Ko-8B-Instruct-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'maywell/Llama-3-Ko-8B-Instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0428HMA8
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0553
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
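Expressed as `transformers` `TrainingArguments`, the recipe above corresponds roughly to the sketch below (`output_dir` is a placeholder; the Adam betas/epsilon are the library defaults listed above):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="O0428HMA8",              # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,      # 8 x 16 = 128 effective train batch
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,                           # "Native AMP" mixed precision
)
```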
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7563 | 0.09 | 10 | 0.1800 |
| 0.1866 | 0.18 | 20 | 0.1584 |
| 0.1527 | 0.27 | 30 | 0.1644 |
| 0.1553 | 0.36 | 40 | 0.1529 |
| 0.1503 | 0.45 | 50 | 0.1481 |
| 0.15 | 0.54 | 60 | 0.1472 |
| 0.1486 | 0.63 | 70 | 0.1470 |
| 0.1491 | 0.73 | 80 | 0.1499 |
| 0.1464 | 0.82 | 90 | 0.1489 |
| 0.1483 | 0.91 | 100 | 0.1503 |
| 0.1502 | 1.0 | 110 | 0.1495 |
| 0.1424 | 1.09 | 120 | 0.1640 |
| 0.1267 | 1.18 | 130 | 0.1005 |
| 0.131 | 1.27 | 140 | 0.1975 |
| 0.1214 | 1.36 | 150 | 0.0764 |
| 0.0707 | 1.45 | 160 | 0.0716 |
| 0.0616 | 1.54 | 170 | 0.0617 |
| 0.0663 | 1.63 | 180 | 0.0601 |
| 0.7161 | 1.72 | 190 | 4.6913 |
| 0.5982 | 1.81 | 200 | 0.0666 |
| 0.0729 | 1.9 | 210 | 0.0584 |
| 0.0632 | 1.99 | 220 | 0.0565 |
| 0.0603 | 2.08 | 230 | 0.0594 |
| 0.0543 | 2.18 | 240 | 0.0587 |
| 0.0564 | 2.27 | 250 | 0.0580 |
| 0.0604 | 2.36 | 260 | 0.0579 |
| 0.0544 | 2.45 | 270 | 0.0565 |
| 0.0526 | 2.54 | 280 | 0.0556 |
| 0.0548 | 2.63 | 290 | 0.0574 |
| 0.0568 | 2.72 | 300 | 0.0552 |
| 0.0568 | 2.81 | 310 | 0.0551 |
| 0.0578 | 2.9 | 320 | 0.0553 |
| 0.0604 | 2.99 | 330 | 0.0553 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA8", "results": []}]} | Litzy619/O0428HMA8 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T01:01:52+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
| O0428HMA8
=========
This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0553
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0428HMA19
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 60
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3653 | 0.09 | 10 | 0.1662 |
| 0.161 | 0.18 | 20 | 0.1547 |
| 0.1465 | 0.27 | 30 | 0.1611 |
| 0.1549 | 0.36 | 40 | 0.1513 |
| 0.1531 | 0.45 | 50 | 0.1534 |
| 0.1543 | 0.54 | 60 | 0.1653 |
| 0.1577 | 0.63 | 70 | 0.1531 |
| 0.1543 | 0.73 | 80 | 0.1564 |
| 0.1478 | 0.82 | 90 | 0.1518 |
| 0.1492 | 0.91 | 100 | 0.1490 |
| 0.1513 | 1.0 | 110 | 0.1487 |
| 0.1461 | 1.09 | 120 | 0.1484 |
| 0.1476 | 1.18 | 130 | 0.1532 |
| 0.1479 | 1.27 | 140 | 0.1507 |
| 0.1494 | 1.36 | 150 | 0.1488 |
| 0.1453 | 1.45 | 160 | 0.1479 |
| 0.1461 | 1.54 | 170 | 0.1481 |
| 0.1482 | 1.63 | 180 | 0.1464 |
| 0.1481 | 1.72 | 190 | 0.1528 |
| 0.147 | 1.81 | 200 | 0.1477 |
| 0.1484 | 1.9 | 210 | 0.1474 |
| 0.1458 | 1.99 | 220 | 0.1489 |
| 0.1462 | 2.08 | 230 | 0.1475 |
| 0.1408 | 2.18 | 240 | 0.1453 |
| 0.1426 | 2.27 | 250 | 0.1462 |
| 0.1441 | 2.36 | 260 | 0.1470 |
| 0.1419 | 2.45 | 270 | 0.1451 |
| 0.1396 | 2.54 | 280 | 0.1450 |
| 0.1404 | 2.63 | 290 | 0.1449 |
| 0.1433 | 2.72 | 300 | 0.1434 |
| 0.1419 | 2.81 | 310 | 0.1432 |
| 0.1414 | 2.9 | 320 | 0.1433 |
| 0.1431 | 2.99 | 330 | 0.1434 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA19", "results": []}]} | Litzy619/O0428HMA19 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T01:02:43+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
| O0428HMA19
==========
This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1434
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 60
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
null | transformers |
# andreass123/gemma-ko-1.1-2b-it-2-Q4_K_M-GGUF
This model was converted to GGUF format from [`lemon-mint/gemma-ko-1.1-2b-it-2`](https://huggingface.co/lemon-mint/gemma-ko-1.1-2b-it-2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/lemon-mint/gemma-ko-1.1-2b-it-2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo andreass123/gemma-ko-1.1-2b-it-2-Q4_K_M-GGUF --model gemma-ko-1.1-2b-it-2.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo andreass123/gemma-ko-1.1-2b-it-2-Q4_K_M-GGUF --model gemma-ko-1.1-2b-it-2.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gemma-ko-1.1-2b-it-2.Q4_K_M.gguf -n 128
```
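If you only need the quantized file itself (for example, to point a local llama.cpp build at it), a minimal sketch with `huggingface_hub`:

```python
from huggingface_hub import hf_hub_download

# Downloads just the Q4_K_M file from this repo and returns its local path
path = hf_hub_download(
    repo_id="andreass123/gemma-ko-1.1-2b-it-2-Q4_K_M-GGUF",
    filename="gemma-ko-1.1-2b-it-2.Q4_K_M.gguf",
)
print(path)
```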
| {"license": "gemma", "library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["beomi/gemma-ko-2b", "google/gemma-1.1-2b-it", "google/gemma-2b"]} | andreass123/gemma-ko-1.1-2b-it-2-Q4_K_M-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:beomi/gemma-ko-2b",
"base_model:google/gemma-1.1-2b-it",
"base_model:google/gemma-2b",
"license:gemma",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T01:04:48+00:00 | [] | [] | TAGS
#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-beomi/gemma-ko-2b #base_model-google/gemma-1.1-2b-it #base_model-google/gemma-2b #license-gemma #endpoints_compatible #region-us
|
# andreass123/gemma-ko-1.1-2b-it-2-Q4_K_M-GGUF
This model was converted to GGUF format from 'lemon-mint/gemma-ko-1.1-2b-it-2' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# andreass123/gemma-ko-1.1-2b-it-2-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'lemon-mint/gemma-ko-1.1-2b-it-2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-beomi/gemma-ko-2b #base_model-google/gemma-1.1-2b-it #base_model-google/gemma-2b #license-gemma #endpoints_compatible #region-us \n",
"# andreass123/gemma-ko-1.1-2b-it-2-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'lemon-mint/gemma-ko-1.1-2b-it-2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null | Finetune [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on [nielsr/docvqa_1200_examples dataset](https://huggingface.co/datasets/nielsr/docvqa_1200_examples) | {} | Thanhstar/Idefics2-8b-multimodal | null | [
"safetensors",
"region:us"
] | null | 2024-04-30T01:05:25+00:00 | [] | [] | TAGS
#safetensors #region-us
| Finetune HuggingFaceM4/idefics2-8b on nielsr/docvqa_1200_examples dataset | [] | [
"TAGS\n#safetensors #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_16384_512_56M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5991
- F1 Score: 0.6846
- Accuracy: 0.6888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6391 | 1.04 | 200 | 0.6195 | 0.6206 | 0.6562 |
| 0.6074 | 2.08 | 400 | 0.6297 | 0.6565 | 0.6540 |
| 0.5975 | 3.12 | 600 | 0.5994 | 0.6742 | 0.6820 |
| 0.5937 | 4.17 | 800 | 0.6089 | 0.6670 | 0.6673 |
| 0.5889 | 5.21 | 1000 | 0.6060 | 0.6689 | 0.6781 |
| 0.5855 | 6.25 | 1200 | 0.5963 | 0.6736 | 0.6820 |
| 0.5777 | 7.29 | 1400 | 0.5950 | 0.6811 | 0.6849 |
| 0.5782 | 8.33 | 1600 | 0.6053 | 0.6769 | 0.6777 |
| 0.5733 | 9.38 | 1800 | 0.6000 | 0.6743 | 0.6745 |
| 0.5691 | 10.42 | 2000 | 0.5975 | 0.6730 | 0.6843 |
| 0.5691 | 11.46 | 2200 | 0.5956 | 0.6726 | 0.6859 |
| 0.5583 | 12.5 | 2400 | 0.6013 | 0.6764 | 0.6846 |
| 0.5588 | 13.54 | 2600 | 0.6029 | 0.6834 | 0.6852 |
| 0.5587 | 14.58 | 2800 | 0.6056 | 0.6715 | 0.6856 |
| 0.5565 | 15.62 | 3000 | 0.5997 | 0.6864 | 0.6908 |
| 0.5473 | 16.67 | 3200 | 0.6002 | 0.6847 | 0.6911 |
| 0.5471 | 17.71 | 3400 | 0.6040 | 0.6811 | 0.6820 |
| 0.5459 | 18.75 | 3600 | 0.6023 | 0.6906 | 0.6947 |
| 0.544 | 19.79 | 3800 | 0.6026 | 0.6819 | 0.6830 |
| 0.5368 | 20.83 | 4000 | 0.5968 | 0.6882 | 0.6937 |
| 0.5393 | 21.88 | 4200 | 0.6100 | 0.6830 | 0.6836 |
| 0.5357 | 22.92 | 4400 | 0.6072 | 0.6861 | 0.6882 |
| 0.5333 | 23.96 | 4600 | 0.6071 | 0.6864 | 0.6859 |
| 0.5287 | 25.0 | 4800 | 0.6097 | 0.6803 | 0.6810 |
| 0.5251 | 26.04 | 5000 | 0.6108 | 0.6854 | 0.6878 |
| 0.5247 | 27.08 | 5200 | 0.6076 | 0.6877 | 0.6885 |
| 0.5213 | 28.12 | 5400 | 0.6099 | 0.6815 | 0.6826 |
| 0.5185 | 29.17 | 5600 | 0.6080 | 0.6862 | 0.6895 |
| 0.5158 | 30.21 | 5800 | 0.6185 | 0.6785 | 0.6810 |
| 0.5142 | 31.25 | 6000 | 0.6114 | 0.6845 | 0.6872 |
| 0.5144 | 32.29 | 6200 | 0.6159 | 0.6741 | 0.6755 |
| 0.5152 | 33.33 | 6400 | 0.6179 | 0.6809 | 0.6804 |
| 0.5086 | 34.38 | 6600 | 0.6245 | 0.6812 | 0.6875 |
| 0.5058 | 35.42 | 6800 | 0.6241 | 0.6732 | 0.6732 |
| 0.5064 | 36.46 | 7000 | 0.6216 | 0.6814 | 0.6813 |
| 0.5041 | 37.5 | 7200 | 0.6184 | 0.6840 | 0.6856 |
| 0.4978 | 38.54 | 7400 | 0.6274 | 0.6766 | 0.6794 |
| 0.5034 | 39.58 | 7600 | 0.6228 | 0.6807 | 0.6865 |
| 0.5008 | 40.62 | 7800 | 0.6338 | 0.6775 | 0.6764 |
| 0.4983 | 41.67 | 8000 | 0.6237 | 0.6793 | 0.6790 |
| 0.4917 | 42.71 | 8200 | 0.6340 | 0.6759 | 0.6771 |
| 0.4972 | 43.75 | 8400 | 0.6290 | 0.6775 | 0.6768 |
| 0.4955 | 44.79 | 8600 | 0.6269 | 0.6775 | 0.6777 |
| 0.4902 | 45.83 | 8800 | 0.6266 | 0.6778 | 0.6794 |
| 0.4928 | 46.88 | 9000 | 0.6318 | 0.6731 | 0.6722 |
| 0.4917 | 47.92 | 9200 | 0.6288 | 0.6773 | 0.6771 |
| 0.4911 | 48.96 | 9400 | 0.6282 | 0.6727 | 0.6738 |
| 0.4903 | 50.0 | 9600 | 0.6311 | 0.6742 | 0.6742 |
| 0.4888 | 51.04 | 9800 | 0.6316 | 0.6721 | 0.6722 |
| 0.4845 | 52.08 | 10000 | 0.6314 | 0.6734 | 0.6735 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_16384_512_56M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_16384_512_56M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T01:08:30+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K4me2-seqsight\_16384\_512\_56M-L8\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5991
* F1 Score: 0.6846
* Accuracy: 0.6888
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers | <a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a>
# Llama-3 8B Gradient Instruct 1048k
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us a message at [email protected].
For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab)
This model extends Llama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.

**Approach:**
- [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base
- NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization (a toy sketch of the NTK-aware rule follows this list)
- Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below)
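
As a rough illustration of the NTK-aware starting point (not the exact schedule used here, which was subsequently optimized empirically), the standard rule scales RoPE's base frequency with the context-length ratio. The snippet below assumes Llama-3's published base `rope_theta` of 500,000 and a head dimension of 128; note that the empirically tuned values in the table further down differ from this rule of thumb.

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

def ntk_scaled_rope_theta(base_theta: float, scale: float, head_dim: int = 128) -> float:
    # Standard NTK-aware interpolation: theta' = theta * scale ** (d / (d - 2)),
    # which keeps the highest RoPE frequency intact while stretching the rest.
    return base_theta * scale ** (head_dim / (head_dim - 2))

# Going from 8k to 64k context is roughly an 8x length scale.
new_theta = ntk_scaled_rope_theta(500_000.0, scale=8.0)  # ~4.1M; cf. the tuned 15.3M below

# Initialize a longer-context model by overriding rope_theta before training.
config = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
config.rope_theta = new_theta
config.max_position_embeddings = 65_536
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    config=config,
    torch_dtype=torch.bfloat16,
)
```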
**Infra:**
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
**Data:**
For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
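
The augmentation recipe itself is not spelled out in this card; one simple, hypothetical way to produce long-context samples from SlimPajama is to tokenize documents and pack them into fixed-length sequences:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

def pack_documents(doc_stream, tokenizer, target_len=65_536):
    """Yield fixed-length token sequences by concatenating shorter documents."""
    buffer = []
    for doc in doc_stream:
        buffer.extend(tokenizer(doc["text"])["input_ids"])
        while len(buffer) >= target_len:
            yield buffer[:target_len]
            buffer = buffer[target_len:]

# Illustrative only: stream SlimPajama and emit 65k-token training samples.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
stream = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)
sample = next(pack_documents(stream, tokenizer))
print(len(sample))  # 65536
```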
**Progressive Training Details:**
| | 65K | 262K | 524k | 1048k |
|------------------------|-----------|-----------|-----------|-----------|
| Initialize From | LLaMA-3 8B| 65K | 262K | 524k |
| Sequence Length 2^N | 16 | 18 | 19 | 20 |
| RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B |
| Batch Size | 1 | 1 | 16 | 16 |
| Gradient Accumulation Steps | 32 | 16 | 1 | 1 |
| Steps | 30 | 24 | 50 | 50 |
| Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 |
| Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 |
| # GPUs | 8 | 32 | 512 | 512 |
| GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S |
| Minutes to Train (Wall)| 202 | 555 | 61 | 87 |
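
As a consistency check on the table, the Total Tokens row is simply sequence length × batch size × gradient accumulation steps × steps; the GPU count does not enter the product, consistent with each sequence being sharded across devices by Ring Attention:

```python
# Recompute the Total Tokens row: seq_len * batch * grad_accum * steps.
stages = {
    "65K":   (2**16, 1, 32, 30),
    "262K":  (2**18, 1, 16, 24),
    "524k":  (2**19, 16, 1, 50),
    "1048k": (2**20, 16, 1, 50),
}
for name, (seq_len, batch, accum, steps) in stages.items():
    print(name, seq_len * batch * accum * steps)
# 65K 62914560, 262K 100663296, 524k 419430400, 1048k 838860800
```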
**Quants**:
- [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF)
- [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit)
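
For example, the GGUF quant can be run with any llama.cpp-compatible runtime. Below is a minimal, hypothetical sketch using llama-cpp-python; the file name and context size are illustrative, so check the GGUF repo for the actual file names.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-8B-Instruct-1048k.Q4_K_M.gguf",  # illustrative file name
    n_ctx=32_768,     # raise toward 1048k only if you have RAM for the KV cache
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the document below: ..."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```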
## The Gradient AI Team
https://gradient.ai/
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
## Contact Us
Drop an email to [[email protected]](mailto:[email protected])
## References
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] https://github.com/jzhang38/EasyContext
----
# Base Model
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Context length | GQA | Token count | Knowledge cutoff |
|---|---|---|---|---|---|---|
| Llama 3 | A new mix of publicly available online data. | 8B | 8k | Yes | 15T+ | March, 2023 |
| Llama 3 | A new mix of publicly available online data. | 70B | 8k | Yes | 15T+ | December, 2023 |
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
#### Transformers pipeline
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
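# Llama 3 ends each assistant turn with <|eot_id|>, so stop generation on
# either the regular EOS token or <|eot_id|>.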
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
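# The pipeline output is the prompt followed by the completion; slice off
# the prompt to print only the newly generated text.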
print(outputs[0]["generated_text"][len(prompt):])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
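# As above, Llama 3 uses <|eot_id|> to end assistant turns; include it as
# an extra stop token.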
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
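# generate() returns prompt + completion token ids together; keep only the
# tokens produced after the prompt.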
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### Use with `llama3`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3)
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
| | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|---|---|---|---|
| Llama 3 8B | 1.3M | 700 | 390 |
| Llama 3 70B | 6.4M | 700 | 1900 |
| Total | 7.7M | | 2290 |
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
| Category | Benchmark | Llama 3 8B | Llama2 7B | Llama2 13B | Llama 3 70B | Llama2 70B |
|---|---|---|---|---|---|---|
| General | MMLU (5-shot) | 66.6 | 45.7 | 53.8 | 79.5 | 69.7 |
| General | AGIEval English (3-5 shot) | 45.9 | 28.8 | 38.7 | 63.0 | 54.8 |
| General | CommonSenseQA (7-shot) | 72.6 | 57.6 | 67.6 | 83.8 | 78.7 |
| General | Winogrande (5-shot) | 76.1 | 73.3 | 75.4 | 83.1 | 81.8 |
| General | BIG-Bench Hard (3-shot, CoT) | 61.1 | 38.1 | 47.0 | 81.3 | 65.7 |
| General | ARC-Challenge (25-shot) | 78.6 | 53.7 | 67.6 | 93.0 | 85.3 |
| Knowledge reasoning | TriviaQA-Wiki (5-shot) | 78.5 | 72.1 | 79.6 | 89.7 | 87.5 |
| Reading comprehension | SQuAD (1-shot) | 76.4 | 72.2 | 72.1 | 85.6 | 82.6 |
| Reading comprehension | QuAC (1-shot, F1) | 44.4 | 39.6 | 44.9 | 51.1 | 49.4 |
| Reading comprehension | BoolQ (0-shot) | 75.7 | 65.5 | 66.9 | 79.0 | 73.1 |
| Reading comprehension | DROP (3-shot, F1) | 58.4 | 37.9 | 49.8 | 79.7 | 70.2 |
### Instruction tuned models
| Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
|---|---|---|---|---|---|
| MMLU (5-shot) | 68.4 | 34.1 | 47.8 | 82.0 | 52.9 |
| GPQA (0-shot) | 34.2 | 21.7 | 22.3 | 39.5 | 21.0 |
| HumanEval (0-shot) | 62.2 | 7.9 | 14.0 | 81.7 | 25.6 |
| GSM-8K (8-shot, CoT) | 79.6 | 25.7 | 77.4 | 93.0 | 57.5 |
| MATH (4-shot, CoT) | 30.0 | 3.8 | 6.7 | 50.4 | 11.6 |
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a twofold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos | {"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"} | blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw3-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-3",
"conversational",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"3-bit",
"region:us"
] | null | 2024-04-30T01:09:04+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us
| [<img src="URL width="200"/>](URL)
Llama-3 8B Gradient Instruct 1048k
==================================
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us a message at contact@URL.
For more info see our End-to-end development service for custom LLMs and AI systems
This model extends Llama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from Crusoe Energy. It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.
!image/png
Approach:
* meta-llama/Meta-Llama-3-8B-Instruct as the base
* NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization
* Progressive training on increasing context lengths, similar to Large World Model [2] (See details below)
Infra:
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on Crusoe Energy high performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
Data:
For training data, we generate long contexts by augmenting SlimPajama.
Progressive Training Details:
Quants:
* GGUF
* MLX-4bit
The Gradient AI Team
--------------------
URL
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
Contact Us
----------
Drop an email to contact@URL
References
----------
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] URL
---
Base Model
==========
Model Details
-------------
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
Model developers Meta
Variations Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
Input Models input text only.
Output Models generate text and code only.
Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Llama 3 family of models. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date April 18, 2024.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License A custom commercial license is available at: URL
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here.
Intended Use
------------
Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.
Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
How to use
----------
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original 'llama3' codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.
#### Transformers pipeline
#### Transformers AutoModelForCausalLM
### Use with 'llama3'
Please, follow the instructions in the repository
To download Original checkpoints, see the example command below leveraging 'huggingface-cli':
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
Hardware and Software
---------------------
Training Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
Carbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
CO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
Training Data
-------------
Overview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
Benchmarks
----------
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.
### Base pretrained models
### Instruction tuned models
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
Safety
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
Refusals
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL
#### Critical risks
CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a twofold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### Cyber Security
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.
### Child Safety
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.
Finally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.
Ethical Considerations and Limitations
--------------------------------------
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at URL
instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {URL
}
Contributors
------------
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
| [
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us \n",
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_16384_512_56M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5947
- F1 Score: 0.6824
- Accuracy: 0.6891
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
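As noted above, these settings map onto a Hugging Face `TrainingArguments` object roughly as sketched below. The output directory, eval cadence, and anything else not listed in the card are assumptions; the actual training script (including the LoRA setup implied by the `peft` library tag) is not part of this card.

```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
# Only the values mirrored from the card are authoritative; the rest is assumed.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K4me2-seqsight_16384_512_56M-L32_f",  # assumed
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=128,  # train_batch_size: 128
    per_device_eval_batch_size=128,   # eval_batch_size: 128
    seed=42,
    max_steps=10_000,                 # training_steps: 10000
    lr_scheduler_type="linear",
    adam_beta1=0.9,                   # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # epsilon: 1e-08
    evaluation_strategy="steps",      # assumed from the 200-step eval cadence
    eval_steps=200,
)
```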
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.636 | 1.04 | 200 | 0.6087 | 0.6525 | 0.6683 |
| 0.6027 | 2.08 | 400 | 0.6371 | 0.6472 | 0.6448 |
| 0.5907 | 3.12 | 600 | 0.6005 | 0.6783 | 0.6839 |
| 0.5864 | 4.17 | 800 | 0.5999 | 0.6793 | 0.6807 |
| 0.5755 | 5.21 | 1000 | 0.6054 | 0.6726 | 0.6768 |
| 0.5691 | 6.25 | 1200 | 0.5921 | 0.6819 | 0.6856 |
| 0.5599 | 7.29 | 1400 | 0.5950 | 0.6850 | 0.6914 |
| 0.5552 | 8.33 | 1600 | 0.6075 | 0.6805 | 0.6817 |
| 0.5444 | 9.38 | 1800 | 0.6082 | 0.6826 | 0.6813 |
| 0.5373 | 10.42 | 2000 | 0.6086 | 0.6776 | 0.6820 |
| 0.5321 | 11.46 | 2200 | 0.6017 | 0.6780 | 0.6856 |
| 0.5172 | 12.5 | 2400 | 0.6228 | 0.6803 | 0.6852 |
| 0.508 | 13.54 | 2600 | 0.6226 | 0.6767 | 0.6771 |
| 0.504 | 14.58 | 2800 | 0.6280 | 0.6654 | 0.6686 |
| 0.4956 | 15.62 | 3000 | 0.6334 | 0.6742 | 0.6742 |
| 0.4791 | 16.67 | 3200 | 0.6394 | 0.6778 | 0.6794 |
| 0.4715 | 17.71 | 3400 | 0.6500 | 0.6612 | 0.6592 |
| 0.4647 | 18.75 | 3600 | 0.6655 | 0.6715 | 0.6699 |
| 0.4541 | 19.79 | 3800 | 0.6991 | 0.6647 | 0.6628 |
| 0.4432 | 20.83 | 4000 | 0.6577 | 0.6676 | 0.6667 |
| 0.4384 | 21.88 | 4200 | 0.7055 | 0.6646 | 0.6631 |
| 0.4261 | 22.92 | 4400 | 0.7187 | 0.6459 | 0.6432 |
| 0.4209 | 23.96 | 4600 | 0.6900 | 0.6736 | 0.6735 |
| 0.4069 | 25.0 | 4800 | 0.7107 | 0.6618 | 0.6601 |
| 0.3971 | 26.04 | 5000 | 0.7382 | 0.6633 | 0.6618 |
| 0.3943 | 27.08 | 5200 | 0.7328 | 0.6578 | 0.6556 |
| 0.3864 | 28.12 | 5400 | 0.7531 | 0.6634 | 0.6611 |
| 0.3762 | 29.17 | 5600 | 0.7479 | 0.6719 | 0.6722 |
| 0.3672 | 30.21 | 5800 | 0.7751 | 0.6542 | 0.6520 |
| 0.36 | 31.25 | 6000 | 0.7859 | 0.6605 | 0.6588 |
| 0.3614 | 32.29 | 6200 | 0.7895 | 0.6550 | 0.6527 |
| 0.3536 | 33.33 | 6400 | 0.7837 | 0.6685 | 0.6676 |
| 0.3483 | 34.38 | 6600 | 0.7955 | 0.6657 | 0.6667 |
| 0.3391 | 35.42 | 6800 | 0.8129 | 0.6653 | 0.6641 |
| 0.3407 | 36.46 | 7000 | 0.7978 | 0.6617 | 0.6595 |
| 0.3335 | 37.5 | 7200 | 0.8079 | 0.6648 | 0.6644 |
| 0.3227 | 38.54 | 7400 | 0.8304 | 0.6615 | 0.6615 |
| 0.3291 | 39.58 | 7600 | 0.8175 | 0.6639 | 0.6647 |
| 0.321 | 40.62 | 7800 | 0.8559 | 0.6608 | 0.6585 |
| 0.3141 | 41.67 | 8000 | 0.8459 | 0.6617 | 0.6605 |
| 0.3091 | 42.71 | 8200 | 0.8625 | 0.6639 | 0.6637 |
| 0.3056 | 43.75 | 8400 | 0.8581 | 0.6616 | 0.6598 |
| 0.3027 | 44.79 | 8600 | 0.8863 | 0.6657 | 0.6641 |
| 0.2968 | 45.83 | 8800 | 0.8766 | 0.6633 | 0.6611 |
| 0.2979 | 46.88 | 9000 | 0.8802 | 0.6583 | 0.6559 |
| 0.2954 | 47.92 | 9200 | 0.8858 | 0.6617 | 0.6598 |
| 0.2919 | 48.96 | 9400 | 0.8817 | 0.6678 | 0.6667 |
| 0.2894 | 50.0 | 9600 | 0.8913 | 0.6630 | 0.6611 |
| 0.2894 | 51.04 | 9800 | 0.8927 | 0.6666 | 0.6650 |
| 0.2844 | 52.08 | 10000 | 0.8937 | 0.6663 | 0.6647 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_16384_512_56M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_16384_512_56M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T01:09:31+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K4me2-seqsight\_16384\_512\_56M-L32\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5947
* F1 Score: 0.6824
* Accuracy: 0.6891
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K9ac-seqsight_16384_512_56M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4756
- F1 Score: 0.7781
- Accuracy: 0.7776
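Because this checkpoint is a PEFT adapter on top of the seqsight base model rather than a standalone model, loading it for inference presumably follows the usual `peft` pattern sketched below. The sequence-classification head, `num_labels=2`, and the toy DNA input are assumptions inferred from the reported F1/accuracy metrics, not details confirmed by this card.

```python
# Hedged sketch: loading this adapter over its base model for inference.
# The classification head and label count are assumptions; the base model may
# also require trust_remote_code=True depending on its architecture.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_16384_512_56M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)
```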
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5856 | 1.15 | 200 | 0.5516 | 0.7251 | 0.7272 |
| 0.5437 | 2.3 | 400 | 0.5871 | 0.6926 | 0.6977 |
| 0.5193 | 3.45 | 600 | 0.5526 | 0.7251 | 0.7262 |
| 0.5138 | 4.6 | 800 | 0.5540 | 0.7213 | 0.7229 |
| 0.5066 | 5.75 | 1000 | 0.5250 | 0.7486 | 0.7481 |
| 0.4989 | 6.9 | 1200 | 0.5327 | 0.7470 | 0.7467 |
| 0.4933 | 8.05 | 1400 | 0.5393 | 0.7398 | 0.7398 |
| 0.4915 | 9.2 | 1600 | 0.5634 | 0.7221 | 0.7258 |
| 0.4827 | 10.34 | 1800 | 0.5314 | 0.7425 | 0.7424 |
| 0.4864 | 11.49 | 2000 | 0.5129 | 0.7586 | 0.7582 |
| 0.476 | 12.64 | 2200 | 0.5526 | 0.7235 | 0.7265 |
| 0.4789 | 13.79 | 2400 | 0.5209 | 0.7424 | 0.7427 |
| 0.4746 | 14.94 | 2600 | 0.5149 | 0.7450 | 0.7452 |
| 0.4726 | 16.09 | 2800 | 0.5087 | 0.7532 | 0.7531 |
| 0.4701 | 17.24 | 3000 | 0.5291 | 0.7429 | 0.7434 |
| 0.4653 | 18.39 | 3200 | 0.5184 | 0.7429 | 0.7438 |
| 0.4663 | 19.54 | 3400 | 0.5100 | 0.7472 | 0.7481 |
| 0.463 | 20.69 | 3600 | 0.5015 | 0.7633 | 0.7629 |
| 0.4636 | 21.84 | 3800 | 0.5214 | 0.7373 | 0.7391 |
| 0.46 | 22.99 | 4000 | 0.5220 | 0.7375 | 0.7395 |
| 0.4624 | 24.14 | 4200 | 0.4973 | 0.7628 | 0.7625 |
| 0.4533 | 25.29 | 4400 | 0.5217 | 0.7512 | 0.7517 |
| 0.461 | 26.44 | 4600 | 0.5081 | 0.7574 | 0.7575 |
| 0.4552 | 27.59 | 4800 | 0.5101 | 0.7526 | 0.7531 |
| 0.4525 | 28.74 | 5000 | 0.5097 | 0.7493 | 0.7503 |
| 0.4569 | 29.89 | 5200 | 0.5063 | 0.7617 | 0.7618 |
| 0.4494 | 31.03 | 5400 | 0.5174 | 0.7449 | 0.7463 |
| 0.4531 | 32.18 | 5600 | 0.4900 | 0.7669 | 0.7665 |
| 0.4438 | 33.33 | 5800 | 0.5002 | 0.7671 | 0.7668 |
| 0.4539 | 34.48 | 6000 | 0.5053 | 0.7548 | 0.7553 |
| 0.4429 | 35.63 | 6200 | 0.4950 | 0.7679 | 0.7675 |
| 0.4503 | 36.78 | 6400 | 0.4991 | 0.7635 | 0.7636 |
| 0.4449 | 37.93 | 6600 | 0.5143 | 0.7543 | 0.7549 |
| 0.4454 | 39.08 | 6800 | 0.4985 | 0.7660 | 0.7661 |
| 0.4446 | 40.23 | 7000 | 0.5068 | 0.7601 | 0.7607 |
| 0.4443 | 41.38 | 7200 | 0.5043 | 0.7607 | 0.7611 |
| 0.4445 | 42.53 | 7400 | 0.5047 | 0.7618 | 0.7621 |
| 0.4415 | 43.68 | 7600 | 0.5023 | 0.7626 | 0.7629 |
| 0.4388 | 44.83 | 7800 | 0.5066 | 0.7587 | 0.7593 |
| 0.4428 | 45.98 | 8000 | 0.4992 | 0.7662 | 0.7661 |
| 0.4446 | 47.13 | 8200 | 0.5115 | 0.7582 | 0.7589 |
| 0.4398 | 48.28 | 8400 | 0.5004 | 0.7646 | 0.7647 |
| 0.4361 | 49.43 | 8600 | 0.5021 | 0.7613 | 0.7614 |
| 0.4399 | 50.57 | 8800 | 0.5032 | 0.7650 | 0.7650 |
| 0.4384 | 51.72 | 9000 | 0.5080 | 0.7614 | 0.7618 |
| 0.4373 | 52.87 | 9200 | 0.5105 | 0.7562 | 0.7567 |
| 0.44 | 54.02 | 9400 | 0.5097 | 0.7608 | 0.7614 |
| 0.4384 | 55.17 | 9600 | 0.5087 | 0.7598 | 0.7603 |
| 0.4352 | 56.32 | 9800 | 0.5021 | 0.7627 | 0.7629 |
| 0.4381 | 57.47 | 10000 | 0.5048 | 0.7622 | 0.7625 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_16384_512_56M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_16384_512_56M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T01:09:56+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K9ac-seqsight\_16384\_512\_56M-L1\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K9ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4756
* F1 Score: 0.7781
* Accuracy: 0.7776
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
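The snippet is missing from this auto-generated card. Given the repo tags (llama, text-generation, conversational), a minimal sketch might look like the following; the repo ID comes from this card's metadata and everything else is an assumption.

```python
# Hedged sketch: nothing in this card specifies usage, so this is inferred
# solely from the repo tags (llama, text-generation, conversational).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="patelmiteshn/Llama3-3-8B-Instruct-ft-loraAdap",
    device_map="auto",
)
print(generator("What is a large language model?", max_new_tokens=64)[0]["generated_text"])
```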
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | patelmiteshn/Llama3-3-8B-Instruct-ft-loraAdap | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T01:10:37+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# Experiment27pasticheNeuralsynthesis-7B
Experiment27pasticheNeuralsynthesis-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: automerger/Experiment27Pastiche-7B
- model: Kukedlc/NeuralSynthesis-7B-v0.1
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate  # notebook syntax; use plain `pip install` in a shell

from transformers import AutoTokenizer
import transformers
import torch

model = "automerger/Experiment27pasticheNeuralsynthesis-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the chat messages with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Build a half-precision text-generation pipeline spread across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a completion; the returned text includes the prompt.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]} | automerger/Experiment27pasticheNeuralsynthesis-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T01:11:08+00:00 | [] | [] | TAGS
#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us
|
# Experiment27pasticheNeuralsynthesis-7B
Experiment27pasticheNeuralsynthesis-7B is an automated merge created by Maxime Labonne using the following configuration.
## Configuration
## Usage
| [
"# Experiment27pasticheNeuralsynthesis-7B\n\nExperiment27pasticheNeuralsynthesis-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] | [
"TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n",
"# Experiment27pasticheNeuralsynthesis-7B\n\nExperiment27pasticheNeuralsynthesis-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2
This model was trained from scratch on an unspecified dataset (recorded as `None` by the trainer).
It achieves the following results on the evaluation set:
- Loss: 0.9137
- Accuracy: 0.7434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `Trainer` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
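As noted above, the listed hyperparameters translate into a `Trainer` configuration roughly like the sketch below; the dataset, tokenizer, and model head are unknown from this card and appear only as placeholders.

```python
# Hedged sketch: the card's hyperparameters as TrainingArguments, plus the
# accuracy metric it reports. Dataset/model wiring is omitted as unknown.
import numpy as np
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="albert-base-v2",   # assumed from the card title
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",   # assumed: the table reports one eval per epoch
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}
```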
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 293 | 0.4585 | 0.7766 |
| 0.5555 | 2.0 | 586 | 0.5768 | 0.7200 |
| 0.5555 | 3.0 | 879 | 0.6414 | 0.7065 |
| 0.3357 | 4.0 | 1172 | 0.7783 | 0.7328 |
| 0.3357 | 5.0 | 1465 | 0.9137 | 0.7434 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "albert-base-v2", "results": []}]} | DenysZakharkevych/albert-base-v2 | null | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T01:11:19+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #albert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| albert-base-v2
==============
This model was trained from scratch on an unspecified dataset (recorded as `None` by the trainer).
It achieves the following results on the evaluation set:
* Loss: 0.9137
* Accuracy: 0.7434
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #albert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
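This card is an unfilled template as well; since the repo tags again indicate a llama text-generation model, a hedged minimal sketch follows (repo ID from this card's metadata, all else assumed).

```python
# Hedged sketch inferred from the repo tags; the card provides no usage code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "cilantro9246/dz5jzuu"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("What is a large language model?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```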
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cilantro9246/dz5jzuu | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T01:11:31+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
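The card does not yet include a snippet, so the following is a minimal, unofficial sketch. It assumes the checkpoint is a standard Llama-style causal LM that loads through the regular transformers text-generation API; the repo id is taken from this card's metadata, and the prompt and generation settings are illustrative only.

```python
# Minimal, unofficial sketch: assumes a standard Llama-style causal LM
# that loads through the regular transformers text-generation API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cilantro9246/dz5jzuu"  # repo id taken from this card's metadata

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs `accelerate`

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```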
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# style-mixed-llama3
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 9
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
- mixed_precision_training: Native AMP
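These values map fairly directly onto `transformers.TrainingArguments`. The reconstruction below is a hedged sketch only: the actual training script is not published, the `output_dir` is assumed, and "Native AMP" does not say whether fp16 or bf16 precision was used.

```python
# Hypothetical reconstruction of the hyperparameters listed above.
# The real training script is not published; treat this as a sketch only.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="style-mixed-llama3",   # assumed, not stated in the card
    learning_rate=2e-4,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=3,     # effective train batch size: 3 * 3 = 9
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=3,
    fp16=True,                         # "Native AMP"; could equally have been bf16=True
    optim="adamw_torch",               # Adam(W) with betas=(0.9, 0.999), eps=1e-8 (defaults)
)
```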
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B", "model-index": [{"name": "style-mixed-llama3", "results": []}]} | KevinLiuR/style-mixed-llama3 | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null | 2024-04-30T01:13:12+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-meta-llama/Meta-Llama-3-8B #license-other #region-us
|
# style-mixed-llama3
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 9
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# style-mixed-llama3\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 3\n- total_train_batch_size: 9\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.3.0\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-meta-llama/Meta-Llama-3-8B #license-other #region-us \n",
"# style-mixed-llama3\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 3\n- total_train_batch_size: 9\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.3.0\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
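Pending official instructions, here is a minimal sketch. It assumes the tokenizer ships a chat template (the "conversational" tag suggests one, but the card does not confirm it); the repo id comes from this card's metadata, and the example message is illustrative.

```python
# Minimal sketch, assuming the model ships a chat template in its tokenizer
# (the "conversational" tag suggests one, but the card does not confirm it).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shallow6414/nobc630"  # repo id from this card's metadata

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize what a model card is."}]
prompt_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(prompt_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][prompt_ids.shape[-1]:], skip_special_tokens=True))
```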
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/nobc630 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T01:14:02+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Danielbrdz/Barcenas-Llama3-8b-ORPO
<!-- provided-files -->
weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up within a week or so of the static ones, I have probably not planned them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
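As one concrete (unofficial) route, a single quant from the table below can be fetched with `huggingface_hub` and run with `llama-cpp-python`; any multi-part files would first need concatenating as described in those READMEs. The filename matches the Q4_K_M row below, and the prompt and context size are illustrative.

```python
# Illustrative only: downloads one quant and runs a prompt with llama-cpp-python.
# The filename matches the Q4_K_M row in the table below; adjust to taste.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Barcenas-Llama3-8b-ORPO-GGUF",
    filename="Barcenas-Llama3-8b-ORPO.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Q: What is direct preference optimization? A:", max_tokens=64)["choices"][0]["text"])
```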
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Barcenas-Llama3-8b-ORPO-GGUF/resolve/main/Barcenas-Llama3-8b-ORPO.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["llama", "ORPO"], "base_model": "Danielbrdz/Barcenas-Llama3-8b-ORPO", "license_link": "LICENSE", "license_name": "llama3", "quantized_by": "mradermacher"} | mradermacher/Barcenas-Llama3-8b-ORPO-GGUF | null | [
"transformers",
"gguf",
"llama",
"ORPO",
"en",
"base_model:Danielbrdz/Barcenas-Llama3-8b-ORPO",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T01:14:03+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama #ORPO #en #base_model-Danielbrdz/Barcenas-Llama3-8b-ORPO #license-other #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up within a week or so of the static ones, I have probably not planned them; feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #llama #ORPO #en #base_model-Danielbrdz/Barcenas-Llama3-8b-ORPO #license-other #endpoints_compatible #region-us \n"
] |