pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1–900k) | metadata (stringlengths 2–438k) | id (stringlengths 5–122) | last_modified (null) | tags (sequencelengths 1–1.84k) | sha (null) | created_at (stringlengths 25–25) | arxiv (sequencelengths 0–201) | languages (sequencelengths 0–1.83k) | tags_str (stringlengths 17–9.34k) | text_str (stringlengths 0–389k) | text_lists (sequencelengths 0–722) | processed_texts (sequencelengths 1–723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
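The card itself leaves this section as a placeholder, so the following is only a minimal sketch: it assumes the standard 🤗 Transformers causal-LM API, that the tokenizer ships a chat template (the repo is tagged `conversational`), and that the repository id is the one recorded for this entry (`devesh220897/financial-chatbot-for-young-adults-4`); the prompt is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id taken from this record; adjust if the weights live elsewhere.
model_id = "devesh220897/financial-chatbot-for-young-adults-4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; the card does not document a required prompt format.
messages = [{"role": "user", "content": "How should I start building an emergency fund?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```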
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | devesh220897/financial-chatbot-for-young-adults-4 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T00:28:21+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3479
- F1 Score: 0.8611
- Accuracy: 0.8604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
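The hyperparameters listed above map onto a `TrainingArguments`/`Trainer` setup roughly as sketched below. This is an illustrative reconstruction, not the training script that produced the checkpoint: the LoRA settings, the number of labels, and the use of a sequence-classification head are assumptions (the card only records that PEFT 0.9.0 and the base model `mahdibaghbanzadeh/seqsight_4096_512_46M` were used).

```python
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_id = "mahdibaghbanzadeh/seqsight_4096_512_46M"
tokenizer = AutoTokenizer.from_pretrained(base_id)
# num_labels is an assumption for the splice-site task; trust_remote_code may be
# needed depending on how the base genomic model is packaged.
model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=3)

# Placeholder LoRA config; the card does not record r, alpha, or target modules.
model = get_peft_model(model, LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16))

args = TrainingArguments(
    output_dir="GUE_splice_reconstructed-seqsight_4096_512_46M-L1_f",
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=128,  # train_batch_size: 128
    per_device_eval_batch_size=128,   # eval_batch_size: 128
    seed=42,
    max_steps=10_000,                 # training_steps: 10000
    lr_scheduler_type="linear",
    adam_beta1=0.9,                   # Trainer's default AdamW uses the reported betas/epsilon
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=...,  # mahdibaghbanzadeh/GUE_splice_reconstructed, tokenized
#                   eval_dataset=...)
# trainer.train()
```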
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.9345 | 0.7 | 200 | 0.8741 | 0.5165 | 0.5761 |
| 0.7904 | 1.4 | 400 | 0.5650 | 0.7493 | 0.7519 |
| 0.5261 | 2.1 | 600 | 0.4778 | 0.7942 | 0.7937 |
| 0.478 | 2.8 | 800 | 0.4621 | 0.8103 | 0.8097 |
| 0.4593 | 3.5 | 1000 | 0.4522 | 0.8124 | 0.8115 |
| 0.4463 | 4.2 | 1200 | 0.4375 | 0.8194 | 0.8181 |
| 0.4347 | 4.9 | 1400 | 0.4279 | 0.8234 | 0.8224 |
| 0.4273 | 5.59 | 1600 | 0.4331 | 0.8230 | 0.8227 |
| 0.4151 | 6.29 | 1800 | 0.4419 | 0.8198 | 0.8185 |
| 0.4036 | 6.99 | 2000 | 0.4257 | 0.8270 | 0.8255 |
| 0.3988 | 7.69 | 2200 | 0.3926 | 0.8380 | 0.8371 |
| 0.3947 | 8.39 | 2400 | 0.4171 | 0.8293 | 0.8281 |
| 0.3959 | 9.09 | 2600 | 0.4144 | 0.8316 | 0.8301 |
| 0.3848 | 9.79 | 2800 | 0.3916 | 0.8441 | 0.8431 |
| 0.3842 | 10.49 | 3000 | 0.3868 | 0.8481 | 0.8472 |
| 0.3726 | 11.19 | 3200 | 0.4221 | 0.8294 | 0.8284 |
| 0.375 | 11.89 | 3400 | 0.3941 | 0.8421 | 0.8409 |
| 0.3636 | 12.59 | 3600 | 0.3826 | 0.8468 | 0.8461 |
| 0.3693 | 13.29 | 3800 | 0.3817 | 0.8479 | 0.8470 |
| 0.3604 | 13.99 | 4000 | 0.3992 | 0.8427 | 0.8415 |
| 0.3566 | 14.69 | 4200 | 0.3819 | 0.8512 | 0.8503 |
| 0.3539 | 15.38 | 4400 | 0.3803 | 0.8516 | 0.8507 |
| 0.3463 | 16.08 | 4600 | 0.4195 | 0.8349 | 0.8338 |
| 0.3498 | 16.78 | 4800 | 0.3813 | 0.8503 | 0.8494 |
| 0.345 | 17.48 | 5000 | 0.3904 | 0.8484 | 0.8472 |
| 0.3447 | 18.18 | 5200 | 0.3690 | 0.8580 | 0.8573 |
| 0.3424 | 18.88 | 5400 | 0.3667 | 0.8582 | 0.8575 |
| 0.3325 | 19.58 | 5600 | 0.3646 | 0.8591 | 0.8584 |
| 0.3418 | 20.28 | 5800 | 0.3595 | 0.8615 | 0.8608 |
| 0.336 | 20.98 | 6000 | 0.3636 | 0.8581 | 0.8573 |
| 0.3391 | 21.68 | 6200 | 0.3748 | 0.8532 | 0.8525 |
| 0.3304 | 22.38 | 6400 | 0.3715 | 0.8561 | 0.8553 |
| 0.3272 | 23.08 | 6600 | 0.3687 | 0.8565 | 0.8555 |
| 0.3302 | 23.78 | 6800 | 0.3691 | 0.8571 | 0.8562 |
| 0.3278 | 24.48 | 7000 | 0.3756 | 0.8573 | 0.8564 |
| 0.3219 | 25.17 | 7200 | 0.3740 | 0.8538 | 0.8529 |
| 0.3232 | 25.87 | 7400 | 0.3772 | 0.8535 | 0.8525 |
| 0.322 | 26.57 | 7600 | 0.3782 | 0.8552 | 0.8542 |
| 0.3195 | 27.27 | 7800 | 0.3701 | 0.8566 | 0.8558 |
| 0.3248 | 27.97 | 8000 | 0.3707 | 0.8558 | 0.8549 |
| 0.3255 | 28.67 | 8200 | 0.3815 | 0.8519 | 0.8509 |
| 0.318 | 29.37 | 8400 | 0.3691 | 0.8553 | 0.8544 |
| 0.3154 | 30.07 | 8600 | 0.3580 | 0.8620 | 0.8612 |
| 0.3157 | 30.77 | 8800 | 0.3694 | 0.8551 | 0.8542 |
| 0.3171 | 31.47 | 9000 | 0.3643 | 0.8562 | 0.8553 |
| 0.3198 | 32.17 | 9200 | 0.3615 | 0.8588 | 0.8580 |
| 0.3182 | 32.87 | 9400 | 0.3648 | 0.8578 | 0.8571 |
| 0.316 | 33.57 | 9600 | 0.3717 | 0.8543 | 0.8534 |
| 0.3129 | 34.27 | 9800 | 0.3659 | 0.8577 | 0.8569 |
| 0.3166 | 34.97 | 10000 | 0.3673 | 0.8564 | 0.8555 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T00:28:25+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_splice\_reconstructed-seqsight\_4096\_512\_46M-L1\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3479
* F1 Score: 0.8611
* Accuracy: 0.8604
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
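This section is also left as a placeholder; as a hedged alternative to the `AutoModelForCausalLM` route shown earlier in this dump, the high-level `pipeline` API can be used. The repository id comes from this record, and the triple-verbalisation prompt is only a guess based on the model name (TripletDolly-WebNLG).

```python
from transformers import pipeline

# Repository id taken from this record; a 7B model needs a GPU with enough memory.
generator = pipeline(
    "text-generation",
    model="rPucs/gemma-7b-itTripletDolly-WebNLG",
    device_map="auto",
)

# Illustrative prompt only; the card does not document the expected input format.
prompt = "Verbalise the triple: <Alan_Shepard | birthPlace | New_Hampshire>"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```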
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | rPucs/gemma-7b-itTripletDolly-WebNLG | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T00:30:38+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
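No pipeline tag or architecture is recorded for this checkpoint, so only a generic, hedged load via the Auto classes can be sketched; inspecting the config first shows which task-specific Auto class (if any) should replace `AutoModel`.

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

repo = "jsingh/autoflow-math-v0.3"  # repository id taken from this record

config = AutoConfig.from_pretrained(repo)
print(config.architectures)  # e.g. a *ForCausalLM head would call for AutoModelForCausalLM instead

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)  # bare backbone; swap in the head class reported above
```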
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jsingh/autoflow-math-v0.3 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T00:32:14+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3085
- F1 Score: 0.8848
- Accuracy: 0.8843
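Since these cards document only the training run, reproducing the evaluation numbers requires attaching the published adapter to the base model. A hedged sketch follows; the head class and `num_labels` are assumptions, and `trust_remote_code` may be required depending on how the base model is packaged.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_4096_512_46M"
adapter_id = "mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_4096_512_46M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=3)

# Load the LoRA adapter weights published in this repository on top of the base model.
model = PeftModel.from_pretrained(base, adapter_id).eval()
```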
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.9058 | 0.7 | 200 | 0.7893 | 0.6650 | 0.6637 |
| 0.5287 | 1.4 | 400 | 0.4440 | 0.8207 | 0.8196 |
| 0.4396 | 2.1 | 600 | 0.4073 | 0.8350 | 0.8338 |
| 0.4028 | 2.8 | 800 | 0.4125 | 0.8360 | 0.8352 |
| 0.387 | 3.5 | 1000 | 0.3674 | 0.8587 | 0.8582 |
| 0.3728 | 4.2 | 1200 | 0.3927 | 0.8439 | 0.8426 |
| 0.3595 | 4.9 | 1400 | 0.3759 | 0.8529 | 0.8520 |
| 0.3518 | 5.59 | 1600 | 0.3751 | 0.8604 | 0.8604 |
| 0.3369 | 6.29 | 1800 | 0.3757 | 0.8575 | 0.8564 |
| 0.3277 | 6.99 | 2000 | 0.3672 | 0.8590 | 0.8580 |
| 0.3213 | 7.69 | 2200 | 0.3529 | 0.8647 | 0.8639 |
| 0.318 | 8.39 | 2400 | 0.3513 | 0.8681 | 0.8674 |
| 0.3154 | 9.09 | 2600 | 0.3567 | 0.8653 | 0.8643 |
| 0.3012 | 9.79 | 2800 | 0.3442 | 0.8697 | 0.8689 |
| 0.3008 | 10.49 | 3000 | 0.3317 | 0.8769 | 0.8764 |
| 0.2955 | 11.19 | 3200 | 0.3715 | 0.8606 | 0.8597 |
| 0.2961 | 11.89 | 3400 | 0.3457 | 0.8651 | 0.8641 |
| 0.2847 | 12.59 | 3600 | 0.3518 | 0.8628 | 0.8621 |
| 0.289 | 13.29 | 3800 | 0.3271 | 0.8747 | 0.8742 |
| 0.2789 | 13.99 | 4000 | 0.3435 | 0.8743 | 0.8735 |
| 0.2741 | 14.69 | 4200 | 0.3318 | 0.8768 | 0.8762 |
| 0.2738 | 15.38 | 4400 | 0.3299 | 0.8770 | 0.8764 |
| 0.2676 | 16.08 | 4600 | 0.3491 | 0.8695 | 0.8687 |
| 0.2718 | 16.78 | 4800 | 0.3407 | 0.8741 | 0.8733 |
| 0.2597 | 17.48 | 5000 | 0.3563 | 0.8644 | 0.8637 |
| 0.264 | 18.18 | 5200 | 0.3287 | 0.8805 | 0.8799 |
| 0.2607 | 18.88 | 5400 | 0.3233 | 0.8793 | 0.8788 |
| 0.2502 | 19.58 | 5600 | 0.3305 | 0.8790 | 0.8783 |
| 0.2586 | 20.28 | 5800 | 0.3144 | 0.8865 | 0.8860 |
| 0.2536 | 20.98 | 6000 | 0.3183 | 0.8865 | 0.8860 |
| 0.254 | 21.68 | 6200 | 0.3390 | 0.8742 | 0.8735 |
| 0.249 | 22.38 | 6400 | 0.3305 | 0.8790 | 0.8783 |
| 0.2439 | 23.08 | 6600 | 0.3274 | 0.8805 | 0.8799 |
| 0.2437 | 23.78 | 6800 | 0.3310 | 0.8823 | 0.8816 |
| 0.2391 | 24.48 | 7000 | 0.3360 | 0.8790 | 0.8783 |
| 0.2398 | 25.17 | 7200 | 0.3339 | 0.8789 | 0.8783 |
| 0.2389 | 25.87 | 7400 | 0.3430 | 0.8775 | 0.8768 |
| 0.2386 | 26.57 | 7600 | 0.3329 | 0.8805 | 0.8799 |
| 0.2317 | 27.27 | 7800 | 0.3397 | 0.8786 | 0.8779 |
| 0.237 | 27.97 | 8000 | 0.3363 | 0.8821 | 0.8814 |
| 0.2378 | 28.67 | 8200 | 0.3350 | 0.8825 | 0.8819 |
| 0.2293 | 29.37 | 8400 | 0.3259 | 0.8855 | 0.8849 |
| 0.2276 | 30.07 | 8600 | 0.3260 | 0.8853 | 0.8847 |
| 0.2258 | 30.77 | 8800 | 0.3317 | 0.8840 | 0.8834 |
| 0.2307 | 31.47 | 9000 | 0.3354 | 0.8834 | 0.8827 |
| 0.2279 | 32.17 | 9200 | 0.3311 | 0.8842 | 0.8836 |
| 0.2301 | 32.87 | 9400 | 0.3304 | 0.8820 | 0.8814 |
| 0.2277 | 33.57 | 9600 | 0.3398 | 0.8812 | 0.8805 |
| 0.2225 | 34.27 | 9800 | 0.3342 | 0.8829 | 0.8823 |
| 0.2288 | 34.97 | 10000 | 0.3341 | 0.8827 | 0.8821 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T00:33:37+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_splice\_reconstructed-seqsight\_4096\_512\_46M-L8\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3085
* F1 Score: 0.8848
* Accuracy: 0.8843
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2844
- F1 Score: 0.9056
- Accuracy: 0.9053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.8179 | 0.7 | 200 | 0.5204 | 0.7837 | 0.7830 |
| 0.4352 | 1.4 | 400 | 0.3973 | 0.8452 | 0.8446 |
| 0.3886 | 2.1 | 600 | 0.3784 | 0.8550 | 0.8540 |
| 0.3519 | 2.8 | 800 | 0.3853 | 0.8541 | 0.8534 |
| 0.3325 | 3.5 | 1000 | 0.3315 | 0.8748 | 0.8744 |
| 0.318 | 4.2 | 1200 | 0.3462 | 0.8679 | 0.8669 |
| 0.3053 | 4.9 | 1400 | 0.3358 | 0.8734 | 0.8726 |
| 0.2965 | 5.59 | 1600 | 0.3560 | 0.8635 | 0.8634 |
| 0.2831 | 6.29 | 1800 | 0.3516 | 0.8673 | 0.8665 |
| 0.2708 | 6.99 | 2000 | 0.3149 | 0.8816 | 0.8812 |
| 0.2641 | 7.69 | 2200 | 0.3367 | 0.8767 | 0.8759 |
| 0.2582 | 8.39 | 2400 | 0.3258 | 0.8779 | 0.8775 |
| 0.2553 | 9.09 | 2600 | 0.3211 | 0.8842 | 0.8834 |
| 0.2426 | 9.79 | 2800 | 0.3137 | 0.8889 | 0.8882 |
| 0.2364 | 10.49 | 3000 | 0.2944 | 0.8936 | 0.8932 |
| 0.2299 | 11.19 | 3200 | 0.3200 | 0.8847 | 0.8840 |
| 0.2285 | 11.89 | 3400 | 0.3317 | 0.8815 | 0.8808 |
| 0.2136 | 12.59 | 3600 | 0.3326 | 0.8763 | 0.8757 |
| 0.2152 | 13.29 | 3800 | 0.3053 | 0.8900 | 0.8895 |
| 0.2078 | 13.99 | 4000 | 0.3316 | 0.8802 | 0.8794 |
| 0.2028 | 14.69 | 4200 | 0.3239 | 0.8866 | 0.8858 |
| 0.2028 | 15.38 | 4400 | 0.3240 | 0.8852 | 0.8847 |
| 0.1888 | 16.08 | 4600 | 0.3376 | 0.8841 | 0.8834 |
| 0.1919 | 16.78 | 4800 | 0.3129 | 0.8903 | 0.8897 |
| 0.1792 | 17.48 | 5000 | 0.3338 | 0.8873 | 0.8867 |
| 0.1846 | 18.18 | 5200 | 0.3108 | 0.8934 | 0.8930 |
| 0.1782 | 18.88 | 5400 | 0.3161 | 0.8889 | 0.8884 |
| 0.1699 | 19.58 | 5600 | 0.3339 | 0.8909 | 0.8904 |
| 0.1747 | 20.28 | 5800 | 0.3043 | 0.8986 | 0.8983 |
| 0.1631 | 20.98 | 6000 | 0.3052 | 0.9013 | 0.9009 |
| 0.1636 | 21.68 | 6200 | 0.3347 | 0.8882 | 0.8878 |
| 0.1586 | 22.38 | 6400 | 0.3218 | 0.8937 | 0.8932 |
| 0.157 | 23.08 | 6600 | 0.3203 | 0.8931 | 0.8926 |
| 0.1546 | 23.78 | 6800 | 0.3180 | 0.8977 | 0.8974 |
| 0.1484 | 24.48 | 7000 | 0.3491 | 0.8866 | 0.8860 |
| 0.1485 | 25.17 | 7200 | 0.3209 | 0.8927 | 0.8924 |
| 0.1447 | 25.87 | 7400 | 0.3323 | 0.8974 | 0.8970 |
| 0.1428 | 26.57 | 7600 | 0.3301 | 0.8965 | 0.8961 |
| 0.1394 | 27.27 | 7800 | 0.3386 | 0.8909 | 0.8904 |
| 0.1414 | 27.97 | 8000 | 0.3319 | 0.8981 | 0.8976 |
| 0.1398 | 28.67 | 8200 | 0.3338 | 0.8970 | 0.8965 |
| 0.1346 | 29.37 | 8400 | 0.3304 | 0.8972 | 0.8968 |
| 0.1327 | 30.07 | 8600 | 0.3334 | 0.8965 | 0.8961 |
| 0.1325 | 30.77 | 8800 | 0.3449 | 0.8931 | 0.8926 |
| 0.1338 | 31.47 | 9000 | 0.3320 | 0.8946 | 0.8941 |
| 0.1282 | 32.17 | 9200 | 0.3409 | 0.8963 | 0.8959 |
| 0.1276 | 32.87 | 9400 | 0.3403 | 0.8963 | 0.8959 |
| 0.1266 | 33.57 | 9600 | 0.3470 | 0.8940 | 0.8935 |
| 0.1238 | 34.27 | 9800 | 0.3421 | 0.8954 | 0.8950 |
| 0.1257 | 34.97 | 10000 | 0.3406 | 0.8959 | 0.8954 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T00:33:38+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_splice\_reconstructed-seqsight\_4096\_512\_46M-L32\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2844
* F1 Score: 0.9056
* Accuracy: 0.9053
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3687
- F1 Score: 0.8338
- Accuracy: 0.834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5178 | 0.79 | 200 | 0.4899 | 0.7461 | 0.746 |
| 0.4791 | 1.58 | 400 | 0.4700 | 0.7734 | 0.774 |
| 0.4718 | 2.37 | 600 | 0.4727 | 0.7702 | 0.771 |
| 0.4615 | 3.16 | 800 | 0.4697 | 0.766 | 0.766 |
| 0.4548 | 3.95 | 1000 | 0.4594 | 0.7833 | 0.784 |
| 0.4524 | 4.74 | 1200 | 0.4634 | 0.7769 | 0.777 |
| 0.453 | 5.53 | 1400 | 0.4565 | 0.7840 | 0.784 |
| 0.442 | 6.32 | 1600 | 0.4560 | 0.7830 | 0.783 |
| 0.447 | 7.11 | 1800 | 0.4591 | 0.7861 | 0.786 |
| 0.4436 | 7.91 | 2000 | 0.4556 | 0.7901 | 0.79 |
| 0.4384 | 8.7 | 2200 | 0.4544 | 0.7820 | 0.782 |
| 0.4351 | 9.49 | 2400 | 0.4534 | 0.7776 | 0.778 |
| 0.4356 | 10.28 | 2600 | 0.4704 | 0.7827 | 0.783 |
| 0.4329 | 11.07 | 2800 | 0.4588 | 0.7940 | 0.794 |
| 0.4323 | 11.86 | 3000 | 0.4618 | 0.7840 | 0.784 |
| 0.4306 | 12.65 | 3200 | 0.4583 | 0.7931 | 0.793 |
| 0.4259 | 13.44 | 3400 | 0.4587 | 0.7890 | 0.789 |
| 0.4277 | 14.23 | 3600 | 0.4532 | 0.7880 | 0.788 |
| 0.4276 | 15.02 | 3800 | 0.4550 | 0.7891 | 0.789 |
| 0.4277 | 15.81 | 4000 | 0.4567 | 0.7920 | 0.792 |
| 0.4216 | 16.6 | 4200 | 0.4542 | 0.7881 | 0.788 |
| 0.4247 | 17.39 | 4400 | 0.4598 | 0.7850 | 0.785 |
| 0.4213 | 18.18 | 4600 | 0.4613 | 0.7820 | 0.782 |
| 0.4207 | 18.97 | 4800 | 0.4634 | 0.7819 | 0.782 |
| 0.4256 | 19.76 | 5000 | 0.4523 | 0.7821 | 0.782 |
| 0.4206 | 20.55 | 5200 | 0.4507 | 0.7878 | 0.788 |
| 0.4201 | 21.34 | 5400 | 0.4563 | 0.7811 | 0.781 |
| 0.4175 | 22.13 | 5600 | 0.4561 | 0.7851 | 0.785 |
| 0.42 | 22.92 | 5800 | 0.4546 | 0.7821 | 0.782 |
| 0.4129 | 23.72 | 6000 | 0.4567 | 0.7861 | 0.786 |
| 0.4176 | 24.51 | 6200 | 0.4500 | 0.7890 | 0.789 |
| 0.4193 | 25.3 | 6400 | 0.4516 | 0.7931 | 0.793 |
| 0.4131 | 26.09 | 6600 | 0.4523 | 0.7820 | 0.782 |
| 0.4134 | 26.88 | 6800 | 0.4575 | 0.7790 | 0.779 |
| 0.4129 | 27.67 | 7000 | 0.4518 | 0.7889 | 0.789 |
| 0.4116 | 28.46 | 7200 | 0.4501 | 0.7849 | 0.785 |
| 0.4146 | 29.25 | 7400 | 0.4514 | 0.7930 | 0.793 |
| 0.4124 | 30.04 | 7600 | 0.4532 | 0.7860 | 0.786 |
| 0.4111 | 30.83 | 7800 | 0.4543 | 0.7831 | 0.783 |
| 0.4116 | 31.62 | 8000 | 0.4488 | 0.7859 | 0.786 |
| 0.4105 | 32.41 | 8200 | 0.4534 | 0.7830 | 0.783 |
| 0.4065 | 33.2 | 8400 | 0.4553 | 0.7860 | 0.786 |
| 0.4106 | 33.99 | 8600 | 0.4539 | 0.7880 | 0.788 |
| 0.4131 | 34.78 | 8800 | 0.4506 | 0.7869 | 0.787 |
| 0.4044 | 35.57 | 9000 | 0.4538 | 0.7870 | 0.787 |
| 0.4084 | 36.36 | 9200 | 0.4561 | 0.7841 | 0.784 |
| 0.4113 | 37.15 | 9400 | 0.4573 | 0.7810 | 0.781 |
| 0.4055 | 37.94 | 9600 | 0.4545 | 0.7870 | 0.787 |
| 0.4065 | 38.74 | 9800 | 0.4552 | 0.7870 | 0.787 |
| 0.4087 | 39.53 | 10000 | 0.4548 | 0.7870 | 0.787 |
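The table reports an F1 score and accuracy at each evaluation step, but the card does not say how they were computed. A hedged sketch of a typical `compute_metrics` callback that could produce both values is shown below; the choice of macro-averaged F1 is an assumption.

```python
# Hedged sketch of a compute_metrics callback producing accuracy and F1.
# Macro averaging is an assumption; the card does not state the averaging used.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="macro"),
    }
```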
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_0-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T00:34:25+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_tf\_0-seqsight\_4096\_512\_46M-L1\_f
=========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3687
* F1 Score: 0.8338
* Accuracy: 0.834
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3756
- F1 Score: 0.8367
- Accuracy: 0.837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5044 | 0.79 | 200 | 0.4765 | 0.7625 | 0.763 |
| 0.4678 | 1.58 | 400 | 0.4608 | 0.7760 | 0.777 |
| 0.4623 | 2.37 | 600 | 0.4678 | 0.7758 | 0.777 |
| 0.453 | 3.16 | 800 | 0.4604 | 0.7780 | 0.778 |
| 0.4461 | 3.95 | 1000 | 0.4543 | 0.7860 | 0.787 |
| 0.443 | 4.74 | 1200 | 0.4557 | 0.7826 | 0.783 |
| 0.4427 | 5.53 | 1400 | 0.4569 | 0.7880 | 0.788 |
| 0.4339 | 6.32 | 1600 | 0.4522 | 0.7961 | 0.796 |
| 0.4377 | 7.11 | 1800 | 0.4591 | 0.7889 | 0.789 |
| 0.4332 | 7.91 | 2000 | 0.4540 | 0.7950 | 0.795 |
| 0.4281 | 8.7 | 2200 | 0.4491 | 0.7889 | 0.789 |
| 0.4232 | 9.49 | 2400 | 0.4480 | 0.7898 | 0.79 |
| 0.4246 | 10.28 | 2600 | 0.4617 | 0.7928 | 0.793 |
| 0.4213 | 11.07 | 2800 | 0.4589 | 0.7959 | 0.796 |
| 0.4184 | 11.86 | 3000 | 0.4577 | 0.802 | 0.802 |
| 0.4163 | 12.65 | 3200 | 0.4438 | 0.7989 | 0.799 |
| 0.4113 | 13.44 | 3400 | 0.4578 | 0.7910 | 0.791 |
| 0.4123 | 14.23 | 3600 | 0.4488 | 0.7970 | 0.797 |
| 0.413 | 15.02 | 3800 | 0.4524 | 0.8030 | 0.803 |
| 0.4097 | 15.81 | 4000 | 0.4458 | 0.8031 | 0.803 |
| 0.4061 | 16.6 | 4200 | 0.4506 | 0.7931 | 0.793 |
| 0.4057 | 17.39 | 4400 | 0.4582 | 0.8010 | 0.801 |
| 0.4026 | 18.18 | 4600 | 0.4506 | 0.7940 | 0.794 |
| 0.4017 | 18.97 | 4800 | 0.4537 | 0.8030 | 0.803 |
| 0.4047 | 19.76 | 5000 | 0.4480 | 0.7930 | 0.793 |
| 0.4006 | 20.55 | 5200 | 0.4479 | 0.7959 | 0.796 |
| 0.3985 | 21.34 | 5400 | 0.4518 | 0.7920 | 0.792 |
| 0.3954 | 22.13 | 5600 | 0.4491 | 0.8031 | 0.803 |
| 0.3984 | 22.92 | 5800 | 0.4509 | 0.7981 | 0.798 |
| 0.3916 | 23.72 | 6000 | 0.4593 | 0.7910 | 0.791 |
| 0.3957 | 24.51 | 6200 | 0.4412 | 0.8000 | 0.8 |
| 0.3965 | 25.3 | 6400 | 0.4484 | 0.7930 | 0.793 |
| 0.3879 | 26.09 | 6600 | 0.4487 | 0.7859 | 0.786 |
| 0.3899 | 26.88 | 6800 | 0.4514 | 0.7890 | 0.789 |
| 0.3879 | 27.67 | 7000 | 0.4503 | 0.7930 | 0.793 |
| 0.3868 | 28.46 | 7200 | 0.4499 | 0.786 | 0.786 |
| 0.3886 | 29.25 | 7400 | 0.4479 | 0.7950 | 0.795 |
| 0.3864 | 30.04 | 7600 | 0.4540 | 0.7950 | 0.795 |
| 0.3848 | 30.83 | 7800 | 0.4525 | 0.7861 | 0.786 |
| 0.3853 | 31.62 | 8000 | 0.4460 | 0.7929 | 0.793 |
| 0.3846 | 32.41 | 8200 | 0.4461 | 0.7930 | 0.793 |
| 0.3812 | 33.2 | 8400 | 0.4534 | 0.7891 | 0.789 |
| 0.3829 | 33.99 | 8600 | 0.4512 | 0.7940 | 0.794 |
| 0.3851 | 34.78 | 8800 | 0.4467 | 0.7920 | 0.792 |
| 0.3764 | 35.57 | 9000 | 0.4498 | 0.7920 | 0.792 |
| 0.3816 | 36.36 | 9200 | 0.4533 | 0.7931 | 0.793 |
| 0.3819 | 37.15 | 9400 | 0.4538 | 0.7930 | 0.793 |
| 0.378 | 37.94 | 9600 | 0.4510 | 0.7891 | 0.789 |
| 0.3787 | 38.74 | 9800 | 0.4517 | 0.7901 | 0.79 |
| 0.3822 | 39.53 | 10000 | 0.4516 | 0.7891 | 0.789 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_0-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T00:34:33+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_tf\_0-seqsight\_4096\_512\_46M-L8\_f
=========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3756
* F1 Score: 0.8367
* Accuracy: 0.837
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llava-1.5-7b-hf-ft-mix-vsft
This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on an unknown dataset.
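No usage example is provided; a minimal, hedged sketch of loading the adapter for inference is shown below. The chat-style prompt template and the image are illustrative assumptions, not taken from the card, and a GPU with `accelerate` installed is assumed for the 7B base model.

```python
# Hedged sketch: load the PEFT adapter on top of llava-hf/llava-1.5-7b-hf.
# The prompt template and image are illustrative placeholders.
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration
from peft import PeftModel

base_id = "llava-hf/llava-1.5-7b-hf"
adapter_id = "Salmoli/llava-1.5-7b-hf-ft-mix-vsft"

processor = AutoProcessor.from_pretrained(base_id)
base_model = LlavaForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"  # assumes a GPU + accelerate
)
model = PeftModel.from_pretrained(base_model, adapter_id)

url = "https://example.com/cat.jpg"  # placeholder image URL
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nDescribe this picture. ASSISTANT:"  # assumed llava-1.5 chat format
inputs = processor(text=prompt, images=image, return_tensors="pt").to(base_model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```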
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.4e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
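A hedged sketch of the corresponding `TrainingArguments` is given below; mapping "Native AMP" to `fp16=True` and the output directory are assumptions, and the trainer/data-collator wiring used for the vision-text SFT run is not documented in the card.

```python
# Hedged sketch of TrainingArguments matching the values listed above.
# "mixed_precision_training: Native AMP" is mapped to fp16=True (assumption:
# bf16 was not used); output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llava-1.5-7b-hf-ft-mix-vsft",  # placeholder
    learning_rate=1.4e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,
)
```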
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1 | {"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "llava-hf/llava-1.5-7b-hf", "model-index": [{"name": "llava-1.5-7b-hf-ft-mix-vsft", "results": []}]} | Salmoli/llava-1.5-7b-hf-ft-mix-vsft | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:llava-hf/llava-1.5-7b-hf",
"region:us"
] | null | 2024-04-27T00:38:53+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-llava-hf/llava-1.5-7b-hf #region-us
|
# llava-1.5-7b-hf-ft-mix-vsft
This model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.4e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1 | [
"# llava-1.5-7b-hf-ft-mix-vsft\n\nThis model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.4e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-llava-hf/llava-1.5-7b-hf #region-us \n",
"# llava-1.5-7b-hf-ft-mix-vsft\n\nThis model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.4e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3707
- F1 Score: 0.8345
- Accuracy: 0.835
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4935 | 0.79 | 200 | 0.4703 | 0.7629 | 0.763 |
| 0.4608 | 1.58 | 400 | 0.4577 | 0.7818 | 0.782 |
| 0.4543 | 2.37 | 600 | 0.4608 | 0.7760 | 0.777 |
| 0.4441 | 3.16 | 800 | 0.4574 | 0.784 | 0.784 |
| 0.4378 | 3.95 | 1000 | 0.4475 | 0.7842 | 0.785 |
| 0.4322 | 4.74 | 1200 | 0.4566 | 0.7828 | 0.783 |
| 0.4326 | 5.53 | 1400 | 0.4554 | 0.7880 | 0.788 |
| 0.423 | 6.32 | 1600 | 0.4503 | 0.7901 | 0.79 |
| 0.4232 | 7.11 | 1800 | 0.4613 | 0.7920 | 0.792 |
| 0.4183 | 7.91 | 2000 | 0.4526 | 0.7910 | 0.791 |
| 0.4111 | 8.7 | 2200 | 0.4646 | 0.7780 | 0.778 |
| 0.4044 | 9.49 | 2400 | 0.4524 | 0.7880 | 0.788 |
| 0.407 | 10.28 | 2600 | 0.4713 | 0.7888 | 0.789 |
| 0.4011 | 11.07 | 2800 | 0.4569 | 0.7950 | 0.795 |
| 0.3959 | 11.86 | 3000 | 0.4624 | 0.7961 | 0.796 |
| 0.3912 | 12.65 | 3200 | 0.4450 | 0.7900 | 0.79 |
| 0.3851 | 13.44 | 3400 | 0.4656 | 0.7860 | 0.786 |
| 0.3828 | 14.23 | 3600 | 0.4627 | 0.7831 | 0.783 |
| 0.385 | 15.02 | 3800 | 0.4619 | 0.7840 | 0.784 |
| 0.3776 | 15.81 | 4000 | 0.4544 | 0.7890 | 0.789 |
| 0.3715 | 16.6 | 4200 | 0.4675 | 0.7900 | 0.79 |
| 0.3724 | 17.39 | 4400 | 0.4740 | 0.7829 | 0.783 |
| 0.3647 | 18.18 | 4600 | 0.4801 | 0.7770 | 0.777 |
| 0.3629 | 18.97 | 4800 | 0.4775 | 0.7906 | 0.791 |
| 0.3614 | 19.76 | 5000 | 0.4744 | 0.7791 | 0.779 |
| 0.3561 | 20.55 | 5200 | 0.4928 | 0.7791 | 0.779 |
| 0.3515 | 21.34 | 5400 | 0.4892 | 0.7790 | 0.779 |
| 0.3462 | 22.13 | 5600 | 0.4900 | 0.784 | 0.784 |
| 0.3465 | 22.92 | 5800 | 0.5117 | 0.77 | 0.77 |
| 0.3425 | 23.72 | 6000 | 0.5007 | 0.7827 | 0.783 |
| 0.3392 | 24.51 | 6200 | 0.4958 | 0.77 | 0.77 |
| 0.3388 | 25.3 | 6400 | 0.5055 | 0.7801 | 0.78 |
| 0.3305 | 26.09 | 6600 | 0.5099 | 0.7690 | 0.769 |
| 0.3297 | 26.88 | 6800 | 0.5005 | 0.7749 | 0.775 |
| 0.3299 | 27.67 | 7000 | 0.5162 | 0.7739 | 0.774 |
| 0.3221 | 28.46 | 7200 | 0.5418 | 0.7690 | 0.769 |
| 0.325 | 29.25 | 7400 | 0.5136 | 0.7720 | 0.772 |
| 0.3218 | 30.04 | 7600 | 0.5461 | 0.7697 | 0.77 |
| 0.3187 | 30.83 | 7800 | 0.5346 | 0.7718 | 0.772 |
| 0.3196 | 31.62 | 8000 | 0.5273 | 0.7721 | 0.772 |
| 0.3159 | 32.41 | 8200 | 0.5496 | 0.7727 | 0.773 |
| 0.3117 | 33.2 | 8400 | 0.5563 | 0.7758 | 0.776 |
| 0.3111 | 33.99 | 8600 | 0.5353 | 0.7729 | 0.773 |
| 0.315 | 34.78 | 8800 | 0.5371 | 0.7750 | 0.775 |
| 0.3025 | 35.57 | 9000 | 0.5510 | 0.7729 | 0.773 |
| 0.3085 | 36.36 | 9200 | 0.5592 | 0.7667 | 0.767 |
| 0.3092 | 37.15 | 9400 | 0.5583 | 0.7725 | 0.773 |
| 0.3065 | 37.94 | 9600 | 0.5517 | 0.7698 | 0.77 |
| 0.3047 | 38.74 | 9800 | 0.5561 | 0.7698 | 0.77 |
| 0.3058 | 39.53 | 10000 | 0.5579 | 0.7687 | 0.769 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_0-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T00:40:49+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_tf\_0-seqsight\_4096\_512\_46M-L32\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3707
* F1 Score: 0.8345
* Accuracy: 0.835
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3440
- F1 Score: 0.8597
- Accuracy: 0.86
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5363 | 0.83 | 200 | 0.5327 | 0.7246 | 0.725 |
| 0.5001 | 1.67 | 400 | 0.5163 | 0.7348 | 0.735 |
| 0.4875 | 2.5 | 600 | 0.5116 | 0.7419 | 0.742 |
| 0.4785 | 3.33 | 800 | 0.5076 | 0.7285 | 0.729 |
| 0.4821 | 4.17 | 1000 | 0.4986 | 0.7499 | 0.75 |
| 0.4692 | 5.0 | 1200 | 0.5033 | 0.7500 | 0.75 |
| 0.4691 | 5.83 | 1400 | 0.4966 | 0.7480 | 0.748 |
| 0.4676 | 6.67 | 1600 | 0.5014 | 0.7538 | 0.754 |
| 0.4672 | 7.5 | 1800 | 0.4977 | 0.7529 | 0.753 |
| 0.4584 | 8.33 | 2000 | 0.4960 | 0.7569 | 0.757 |
| 0.4615 | 9.17 | 2200 | 0.4971 | 0.7539 | 0.754 |
| 0.4591 | 10.0 | 2400 | 0.5072 | 0.7468 | 0.748 |
| 0.4582 | 10.83 | 2600 | 0.4998 | 0.7505 | 0.751 |
| 0.4564 | 11.67 | 2800 | 0.5020 | 0.7440 | 0.745 |
| 0.4478 | 12.5 | 3000 | 0.5042 | 0.7483 | 0.749 |
| 0.459 | 13.33 | 3200 | 0.4915 | 0.7505 | 0.751 |
| 0.4477 | 14.17 | 3400 | 0.4921 | 0.7529 | 0.753 |
| 0.4504 | 15.0 | 3600 | 0.5029 | 0.7430 | 0.744 |
| 0.448 | 15.83 | 3800 | 0.4952 | 0.7508 | 0.751 |
| 0.4509 | 16.67 | 4000 | 0.4993 | 0.7505 | 0.751 |
| 0.445 | 17.5 | 4200 | 0.4965 | 0.7544 | 0.755 |
| 0.4414 | 18.33 | 4400 | 0.5011 | 0.7589 | 0.759 |
| 0.4473 | 19.17 | 4600 | 0.4987 | 0.7505 | 0.751 |
| 0.4435 | 20.0 | 4800 | 0.4937 | 0.7538 | 0.754 |
| 0.4454 | 20.83 | 5000 | 0.4961 | 0.7518 | 0.752 |
| 0.438 | 21.67 | 5200 | 0.4953 | 0.7468 | 0.747 |
| 0.4404 | 22.5 | 5400 | 0.4954 | 0.754 | 0.754 |
| 0.439 | 23.33 | 5600 | 0.4940 | 0.7546 | 0.755 |
| 0.441 | 24.17 | 5800 | 0.4978 | 0.7518 | 0.752 |
| 0.4402 | 25.0 | 6000 | 0.4925 | 0.7490 | 0.749 |
| 0.4375 | 25.83 | 6200 | 0.4923 | 0.7499 | 0.75 |
| 0.4352 | 26.67 | 6400 | 0.4912 | 0.7489 | 0.749 |
| 0.437 | 27.5 | 6600 | 0.4953 | 0.7540 | 0.754 |
| 0.4358 | 28.33 | 6800 | 0.4912 | 0.7510 | 0.751 |
| 0.4317 | 29.17 | 7000 | 0.4930 | 0.7490 | 0.749 |
| 0.4363 | 30.0 | 7200 | 0.4909 | 0.7559 | 0.756 |
| 0.4326 | 30.83 | 7400 | 0.4944 | 0.7458 | 0.746 |
| 0.431 | 31.67 | 7600 | 0.4911 | 0.7520 | 0.752 |
| 0.4318 | 32.5 | 7800 | 0.4921 | 0.7510 | 0.751 |
| 0.4303 | 33.33 | 8000 | 0.4905 | 0.7530 | 0.753 |
| 0.4334 | 34.17 | 8200 | 0.4898 | 0.7500 | 0.75 |
| 0.427 | 35.0 | 8400 | 0.4901 | 0.7490 | 0.749 |
| 0.4301 | 35.83 | 8600 | 0.4907 | 0.7500 | 0.75 |
| 0.4324 | 36.67 | 8800 | 0.4899 | 0.7540 | 0.754 |
| 0.427 | 37.5 | 9000 | 0.4934 | 0.7468 | 0.747 |
| 0.4287 | 38.33 | 9200 | 0.4915 | 0.7500 | 0.75 |
| 0.4273 | 39.17 | 9400 | 0.4921 | 0.748 | 0.748 |
| 0.4298 | 40.0 | 9600 | 0.4915 | 0.7520 | 0.752 |
| 0.4284 | 40.83 | 9800 | 0.4911 | 0.7480 | 0.748 |
| 0.4278 | 41.67 | 10000 | 0.4909 | 0.7490 | 0.749 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_1-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T00:41:08+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_tf\_1-seqsight\_4096\_512\_46M-L1\_f
=========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3440
* F1 Score: 0.8597
* Accuracy: 0.86
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null | sailboat | {"license": "apache-2.0"} | ikukou/sailboat | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-04-27T00:43:12+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| sailboat | [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold5
This model is a fine-tuned version of [microsoft/swin-large-patch4-window7-224](https://huggingface.co/microsoft/swin-large-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1456
- Accuracy: 0.6720
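The card lacks a usage example; a minimal, hedged sketch using the `image-classification` pipeline is shown below. The image path is a placeholder, and the returned labels depend on the class names of the (undocumented) imagefolder dataset.

```python
# Hedged sketch: run the fine-tuned Swin checkpoint with the
# image-classification pipeline. The image file is a placeholder and the
# labels come from the training imagefolder's class names.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="onizukal/Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold5",
)
print(classifier("example.jpg", top_k=3))  # placeholder image path
```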
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1558 | 1.0 | 924 | 1.1699 | 0.6023 |
| 0.9755 | 2.0 | 1848 | 0.9954 | 0.6522 |
| 0.7954 | 3.0 | 2772 | 0.9696 | 0.6753 |
| 0.5362 | 4.0 | 3696 | 0.9918 | 0.6682 |
| 0.5694 | 5.0 | 4620 | 0.9924 | 0.6712 |
| 0.6743 | 6.0 | 5544 | 1.0447 | 0.6687 |
| 0.5077 | 7.0 | 6468 | 1.0595 | 0.6717 |
| 0.368 | 8.0 | 7392 | 1.1004 | 0.6717 |
| 0.2622 | 9.0 | 8316 | 1.1347 | 0.6709 |
| 0.267 | 10.0 | 9240 | 1.1456 | 0.6720 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-large-patch4-window7-224", "model-index": [{"name": "Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold5", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6719978313906207, "name": "Accuracy"}]}]}]} | onizukal/Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold5 | null | [
"transformers",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-large-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T00:44:34+00:00 | [] | [] | TAGS
#transformers #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-large-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| Boya1\_RMSProp\_1-e5\_10Epoch\_swin-large-patch4-window7-224\_fold5
===================================================================
This model is a fine-tuned version of microsoft/swin-large-patch4-window7-224 on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1456
* Accuracy: 0.6720
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.35.0
* Pytorch 2.1.0
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.0\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#transformers #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-large-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.0\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_Instruction0_SPOAL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
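No inference example is given; a hedged sketch of loading the checkpoint as a standard seq2seq model follows. The Vietnamese input sentence is illustrative, and the task-specific prompt/output format used during fine-tuning (comparative-opinion extraction implied by the model name) is not documented here.

```python
# Hedged sketch: load the fine-tuned ViT5 checkpoint for text2text generation.
# The input sentence is illustrative; the fine-tuning prompt format is not
# documented in the card.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ThuyNT/CS505_COQE_viT5_train_Instruction0_SPOAL_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Điện thoại A có camera tốt hơn điện thoại B."  # illustrative input
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```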
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_SPOAL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_Instruction0_SPOAL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T00:45:10+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# CS505_COQE_viT5_train_Instruction0_SPOAL_v1
This model is a fine-tuned version of VietAI/vit5-large on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# CS505_COQE_viT5_train_Instruction0_SPOAL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# CS505_COQE_viT5_train_Instruction0_SPOAL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3395
- F1 Score: 0.8587
- Accuracy: 0.859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5242 | 0.83 | 200 | 0.5228 | 0.7387 | 0.74 |
| 0.4863 | 1.67 | 400 | 0.5057 | 0.7428 | 0.743 |
| 0.4772 | 2.5 | 600 | 0.4998 | 0.7559 | 0.756 |
| 0.4696 | 3.33 | 800 | 0.4954 | 0.7580 | 0.758 |
| 0.4708 | 4.17 | 1000 | 0.4919 | 0.7550 | 0.755 |
| 0.4593 | 5.0 | 1200 | 0.4989 | 0.7537 | 0.754 |
| 0.4586 | 5.83 | 1400 | 0.4896 | 0.7590 | 0.759 |
| 0.4549 | 6.67 | 1600 | 0.4875 | 0.7630 | 0.763 |
| 0.4559 | 7.5 | 1800 | 0.4899 | 0.7618 | 0.762 |
| 0.4469 | 8.33 | 2000 | 0.4868 | 0.7640 | 0.764 |
| 0.4491 | 9.17 | 2200 | 0.4838 | 0.7610 | 0.761 |
| 0.445 | 10.0 | 2400 | 0.5029 | 0.7458 | 0.747 |
| 0.4443 | 10.83 | 2600 | 0.4904 | 0.7595 | 0.76 |
| 0.4407 | 11.67 | 2800 | 0.4849 | 0.7636 | 0.764 |
| 0.4315 | 12.5 | 3000 | 0.4869 | 0.7516 | 0.752 |
| 0.4437 | 13.33 | 3200 | 0.4789 | 0.7668 | 0.767 |
| 0.4306 | 14.17 | 3400 | 0.4806 | 0.7607 | 0.761 |
| 0.4323 | 15.0 | 3600 | 0.4877 | 0.7603 | 0.761 |
| 0.4293 | 15.83 | 3800 | 0.4833 | 0.7558 | 0.756 |
| 0.4302 | 16.67 | 4000 | 0.4819 | 0.7590 | 0.759 |
| 0.4254 | 17.5 | 4200 | 0.4873 | 0.7620 | 0.763 |
| 0.4196 | 18.33 | 4400 | 0.4848 | 0.7600 | 0.76 |
| 0.4254 | 19.17 | 4600 | 0.4872 | 0.7545 | 0.755 |
| 0.4204 | 20.0 | 4800 | 0.4785 | 0.7630 | 0.763 |
| 0.4233 | 20.83 | 5000 | 0.4829 | 0.7719 | 0.772 |
| 0.4157 | 21.67 | 5200 | 0.4803 | 0.7618 | 0.762 |
| 0.4151 | 22.5 | 5400 | 0.4865 | 0.7609 | 0.761 |
| 0.4151 | 23.33 | 5600 | 0.4885 | 0.7528 | 0.754 |
| 0.4161 | 24.17 | 5800 | 0.4884 | 0.7557 | 0.756 |
| 0.4137 | 25.0 | 6000 | 0.4851 | 0.7590 | 0.759 |
| 0.411 | 25.83 | 6200 | 0.4834 | 0.7550 | 0.755 |
| 0.4078 | 26.67 | 6400 | 0.4848 | 0.7610 | 0.761 |
| 0.4098 | 27.5 | 6600 | 0.4882 | 0.7580 | 0.758 |
| 0.4089 | 28.33 | 6800 | 0.4862 | 0.7630 | 0.763 |
| 0.4039 | 29.17 | 7000 | 0.4860 | 0.7609 | 0.761 |
| 0.4091 | 30.0 | 7200 | 0.4843 | 0.7589 | 0.759 |
| 0.4036 | 30.83 | 7400 | 0.4903 | 0.7536 | 0.754 |
| 0.4007 | 31.67 | 7600 | 0.4874 | 0.762 | 0.762 |
| 0.4024 | 32.5 | 7800 | 0.4933 | 0.7570 | 0.757 |
| 0.4003 | 33.33 | 8000 | 0.4869 | 0.7589 | 0.759 |
| 0.4038 | 34.17 | 8200 | 0.4885 | 0.7569 | 0.757 |
| 0.3966 | 35.0 | 8400 | 0.4872 | 0.7630 | 0.763 |
| 0.3992 | 35.83 | 8600 | 0.4895 | 0.7549 | 0.755 |
| 0.4022 | 36.67 | 8800 | 0.4880 | 0.7559 | 0.756 |
| 0.3953 | 37.5 | 9000 | 0.4931 | 0.7485 | 0.749 |
| 0.3971 | 38.33 | 9200 | 0.4926 | 0.7580 | 0.758 |
| 0.3958 | 39.17 | 9400 | 0.4919 | 0.7599 | 0.76 |
| 0.3969 | 40.0 | 9600 | 0.4915 | 0.7559 | 0.756 |
| 0.3966 | 40.83 | 9800 | 0.4915 | 0.7640 | 0.764 |
| 0.3956 | 41.67 | 10000 | 0.4909 | 0.7579 | 0.758 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_1-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T00:48:37+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_tf\_1-seqsight\_4096\_512\_46M-L8\_f
=========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3395
* F1 Score: 0.8587
* Accuracy: 0.859
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3578
- F1 Score: 0.8591
- Accuracy: 0.86
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5178 | 0.83 | 200 | 0.5181 | 0.7554 | 0.756 |
| 0.4819 | 1.67 | 400 | 0.5070 | 0.7479 | 0.748 |
| 0.4721 | 2.5 | 600 | 0.5003 | 0.7550 | 0.755 |
| 0.4633 | 3.33 | 800 | 0.4913 | 0.7500 | 0.75 |
| 0.4611 | 4.17 | 1000 | 0.4880 | 0.7457 | 0.746 |
| 0.451 | 5.0 | 1200 | 0.4936 | 0.7599 | 0.76 |
| 0.4455 | 5.83 | 1400 | 0.4886 | 0.7550 | 0.755 |
| 0.4398 | 6.67 | 1600 | 0.4827 | 0.7540 | 0.754 |
| 0.4403 | 7.5 | 1800 | 0.4871 | 0.7565 | 0.757 |
| 0.4285 | 8.33 | 2000 | 0.4859 | 0.7600 | 0.76 |
| 0.4291 | 9.17 | 2200 | 0.4824 | 0.756 | 0.756 |
| 0.4231 | 10.0 | 2400 | 0.4961 | 0.7518 | 0.753 |
| 0.4183 | 10.83 | 2600 | 0.4858 | 0.7586 | 0.759 |
| 0.4125 | 11.67 | 2800 | 0.4818 | 0.7580 | 0.758 |
| 0.4046 | 12.5 | 3000 | 0.4903 | 0.7595 | 0.76 |
| 0.4113 | 13.33 | 3200 | 0.4825 | 0.7658 | 0.766 |
| 0.3989 | 14.17 | 3400 | 0.4853 | 0.7730 | 0.773 |
| 0.3986 | 15.0 | 3600 | 0.4928 | 0.7634 | 0.764 |
| 0.3925 | 15.83 | 3800 | 0.4894 | 0.7678 | 0.768 |
| 0.3898 | 16.67 | 4000 | 0.4929 | 0.764 | 0.764 |
| 0.386 | 17.5 | 4200 | 0.4984 | 0.7664 | 0.767 |
| 0.374 | 18.33 | 4400 | 0.5094 | 0.7630 | 0.763 |
| 0.3786 | 19.17 | 4600 | 0.5111 | 0.7638 | 0.764 |
| 0.374 | 20.0 | 4800 | 0.5076 | 0.7560 | 0.756 |
| 0.3709 | 20.83 | 5000 | 0.5112 | 0.7650 | 0.765 |
| 0.3616 | 21.67 | 5200 | 0.5153 | 0.7609 | 0.761 |
| 0.3613 | 22.5 | 5400 | 0.5168 | 0.7610 | 0.761 |
| 0.3574 | 23.33 | 5600 | 0.5195 | 0.7615 | 0.762 |
| 0.3574 | 24.17 | 5800 | 0.5281 | 0.7595 | 0.76 |
| 0.352 | 25.0 | 6000 | 0.5211 | 0.7670 | 0.767 |
| 0.3476 | 25.83 | 6200 | 0.5293 | 0.7609 | 0.761 |
| 0.3458 | 26.67 | 6400 | 0.5182 | 0.7620 | 0.762 |
| 0.3432 | 27.5 | 6600 | 0.5400 | 0.7570 | 0.757 |
| 0.3399 | 28.33 | 6800 | 0.5396 | 0.7500 | 0.75 |
| 0.3329 | 29.17 | 7000 | 0.5313 | 0.7558 | 0.756 |
| 0.3357 | 30.0 | 7200 | 0.5416 | 0.7589 | 0.759 |
| 0.3301 | 30.83 | 7400 | 0.5557 | 0.7504 | 0.751 |
| 0.3245 | 31.67 | 7600 | 0.5575 | 0.7578 | 0.758 |
| 0.328 | 32.5 | 7800 | 0.5649 | 0.7580 | 0.758 |
| 0.3233 | 33.33 | 8000 | 0.5662 | 0.7546 | 0.755 |
| 0.33 | 34.17 | 8200 | 0.5604 | 0.7508 | 0.751 |
| 0.3193 | 35.0 | 8400 | 0.5548 | 0.7619 | 0.762 |
| 0.3212 | 35.83 | 8600 | 0.5584 | 0.7539 | 0.754 |
| 0.3185 | 36.67 | 8800 | 0.5635 | 0.7489 | 0.749 |
| 0.3152 | 37.5 | 9000 | 0.5705 | 0.7573 | 0.758 |
| 0.3089 | 38.33 | 9200 | 0.5811 | 0.7539 | 0.754 |
| 0.3108 | 39.17 | 9400 | 0.5683 | 0.7578 | 0.758 |
| 0.3144 | 40.0 | 9600 | 0.5730 | 0.7547 | 0.755 |
| 0.3124 | 40.83 | 9800 | 0.5736 | 0.7560 | 0.756 |
| 0.31 | 41.67 | 10000 | 0.5746 | 0.7518 | 0.752 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_1-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T00:48:37+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_tf\_1-seqsight\_4096\_512\_46M-L32\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3578
* F1 Score: 0.8591
* Accuracy: 0.86
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3559
- F1 Score: 0.8462
- Accuracy: 0.847
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5287 | 1.34 | 200 | 0.5089 | 0.7376 | 0.738 |
| 0.4796 | 2.68 | 400 | 0.4835 | 0.7698 | 0.77 |
| 0.4591 | 4.03 | 600 | 0.4711 | 0.7733 | 0.774 |
| 0.4496 | 5.37 | 800 | 0.4665 | 0.7789 | 0.779 |
| 0.4341 | 6.71 | 1000 | 0.4513 | 0.7908 | 0.791 |
| 0.4287 | 8.05 | 1200 | 0.4501 | 0.7888 | 0.789 |
| 0.4239 | 9.4 | 1400 | 0.4484 | 0.7960 | 0.796 |
| 0.419 | 10.74 | 1600 | 0.4484 | 0.8027 | 0.803 |
| 0.4107 | 12.08 | 1800 | 0.4510 | 0.7993 | 0.8 |
| 0.4106 | 13.42 | 2000 | 0.4351 | 0.8010 | 0.801 |
| 0.3995 | 14.77 | 2200 | 0.4371 | 0.7987 | 0.799 |
| 0.4025 | 16.11 | 2400 | 0.4327 | 0.8100 | 0.81 |
| 0.3956 | 17.45 | 2600 | 0.4302 | 0.8110 | 0.811 |
| 0.3907 | 18.79 | 2800 | 0.4324 | 0.8090 | 0.809 |
| 0.394 | 20.13 | 3000 | 0.4252 | 0.8099 | 0.81 |
| 0.3851 | 21.48 | 3200 | 0.4278 | 0.8180 | 0.818 |
| 0.3797 | 22.82 | 3400 | 0.4284 | 0.8117 | 0.812 |
| 0.3805 | 24.16 | 3600 | 0.4223 | 0.8129 | 0.813 |
| 0.3756 | 25.5 | 3800 | 0.4274 | 0.8095 | 0.81 |
| 0.3731 | 26.85 | 4000 | 0.4213 | 0.8150 | 0.815 |
| 0.3697 | 28.19 | 4200 | 0.4202 | 0.8110 | 0.811 |
| 0.3623 | 29.53 | 4400 | 0.4219 | 0.8150 | 0.815 |
| 0.3626 | 30.87 | 4600 | 0.4158 | 0.8130 | 0.813 |
| 0.3615 | 32.21 | 4800 | 0.4192 | 0.8260 | 0.826 |
| 0.358 | 33.56 | 5000 | 0.4179 | 0.8159 | 0.816 |
| 0.3587 | 34.9 | 5200 | 0.4164 | 0.8140 | 0.814 |
| 0.3535 | 36.24 | 5400 | 0.4185 | 0.8199 | 0.82 |
| 0.3502 | 37.58 | 5600 | 0.4194 | 0.8189 | 0.819 |
| 0.3495 | 38.93 | 5800 | 0.4166 | 0.8150 | 0.815 |
| 0.3444 | 40.27 | 6000 | 0.4211 | 0.8140 | 0.814 |
| 0.3449 | 41.61 | 6200 | 0.4179 | 0.8190 | 0.819 |
| 0.3427 | 42.95 | 6400 | 0.4192 | 0.8160 | 0.816 |
| 0.3455 | 44.3 | 6600 | 0.4171 | 0.8198 | 0.82 |
| 0.3345 | 45.64 | 6800 | 0.4180 | 0.8180 | 0.818 |
| 0.338 | 46.98 | 7000 | 0.4168 | 0.8149 | 0.815 |
| 0.3369 | 48.32 | 7200 | 0.4144 | 0.8110 | 0.811 |
| 0.3356 | 49.66 | 7400 | 0.4122 | 0.8210 | 0.821 |
| 0.3338 | 51.01 | 7600 | 0.4153 | 0.8189 | 0.819 |
| 0.3316 | 52.35 | 7800 | 0.4163 | 0.8166 | 0.817 |
| 0.3275 | 53.69 | 8000 | 0.4147 | 0.8200 | 0.82 |
| 0.3327 | 55.03 | 8200 | 0.4122 | 0.8180 | 0.818 |
| 0.3313 | 56.38 | 8400 | 0.4119 | 0.8198 | 0.82 |
| 0.3263 | 57.72 | 8600 | 0.4142 | 0.8228 | 0.823 |
| 0.3257 | 59.06 | 8800 | 0.4120 | 0.8179 | 0.818 |
| 0.3257 | 60.4 | 9000 | 0.4129 | 0.8180 | 0.818 |
| 0.3227 | 61.74 | 9200 | 0.4148 | 0.8187 | 0.819 |
| 0.3229 | 63.09 | 9400 | 0.4128 | 0.8219 | 0.822 |
| 0.3251 | 64.43 | 9600 | 0.4119 | 0.8209 | 0.821 |
| 0.3206 | 65.77 | 9800 | 0.4119 | 0.8199 | 0.82 |
| 0.3228 | 67.11 | 10000 | 0.4122 | 0.8199 | 0.82 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_4-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T00:54:41+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_tf\_4-seqsight\_4096\_512\_46M-L1\_f
=========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3559
* F1 Score: 0.8462
* Accuracy: 0.847
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3872
- F1 Score: 0.8608
- Accuracy: 0.861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
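Since this checkpoint is published as a PEFT adapter on top of mahdibaghbanzadeh/seqsight_4096_512_46M rather than as a full model, a minimal loading sketch is shown below. It is not part of the original card: the sequence-classification head is inferred from the F1/accuracy metrics, and extra arguments such as `num_labels` or `trust_remote_code` may be required depending on the base model.

```python
# Hedged sketch (not from the original card): loading the published PEFT adapter.
# A sequence-classification head is assumed, suggested by the F1/accuracy metrics.
from transformers import AutoTokenizer
from peft import AutoPeftModelForSequenceClassification

adapter_id = "mahdibaghbanzadeh/GUE_tf_4-seqsight_4096_512_46M-L8_f"
base_id = "mahdibaghbanzadeh/seqsight_4096_512_46M"

tokenizer = AutoTokenizer.from_pretrained(base_id)  # tokenizer ships with the base model
model = AutoPeftModelForSequenceClassification.from_pretrained(adapter_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
print(model(**inputs).logits.argmax(dim=-1))
```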
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5114 | 1.34 | 200 | 0.4841 | 0.7588 | 0.759 |
| 0.4533 | 2.68 | 400 | 0.4569 | 0.7879 | 0.788 |
| 0.4314 | 4.03 | 600 | 0.4494 | 0.7879 | 0.788 |
| 0.4235 | 5.37 | 800 | 0.4531 | 0.7849 | 0.785 |
| 0.4091 | 6.71 | 1000 | 0.4446 | 0.7892 | 0.79 |
| 0.4027 | 8.05 | 1200 | 0.4379 | 0.8110 | 0.811 |
| 0.3978 | 9.4 | 1400 | 0.4401 | 0.8089 | 0.809 |
| 0.392 | 10.74 | 1600 | 0.4347 | 0.804 | 0.804 |
| 0.382 | 12.08 | 1800 | 0.4428 | 0.8020 | 0.802 |
| 0.3794 | 13.42 | 2000 | 0.4361 | 0.7995 | 0.8 |
| 0.3679 | 14.77 | 2200 | 0.4246 | 0.8059 | 0.806 |
| 0.3698 | 16.11 | 2400 | 0.4268 | 0.8108 | 0.811 |
| 0.3603 | 17.45 | 2600 | 0.4300 | 0.8089 | 0.809 |
| 0.3535 | 18.79 | 2800 | 0.4289 | 0.8220 | 0.822 |
| 0.3563 | 20.13 | 3000 | 0.4140 | 0.8100 | 0.81 |
| 0.3448 | 21.48 | 3200 | 0.4314 | 0.8023 | 0.803 |
| 0.3366 | 22.82 | 3400 | 0.4326 | 0.7985 | 0.799 |
| 0.3365 | 24.16 | 3600 | 0.4185 | 0.8029 | 0.803 |
| 0.3282 | 25.5 | 3800 | 0.4323 | 0.8017 | 0.802 |
| 0.3253 | 26.85 | 4000 | 0.4284 | 0.8069 | 0.807 |
| 0.3204 | 28.19 | 4200 | 0.4233 | 0.8070 | 0.807 |
| 0.311 | 29.53 | 4400 | 0.4352 | 0.8064 | 0.807 |
| 0.3127 | 30.87 | 4600 | 0.4316 | 0.8007 | 0.801 |
| 0.3054 | 32.21 | 4800 | 0.4319 | 0.8050 | 0.805 |
| 0.3041 | 33.56 | 5000 | 0.4301 | 0.8080 | 0.808 |
| 0.2983 | 34.9 | 5200 | 0.4331 | 0.8070 | 0.807 |
| 0.2925 | 36.24 | 5400 | 0.4322 | 0.8126 | 0.813 |
| 0.2899 | 37.58 | 5600 | 0.4373 | 0.808 | 0.808 |
| 0.2883 | 38.93 | 5800 | 0.4288 | 0.8030 | 0.803 |
| 0.2819 | 40.27 | 6000 | 0.4460 | 0.8142 | 0.815 |
| 0.2787 | 41.61 | 6200 | 0.4333 | 0.8130 | 0.813 |
| 0.2745 | 42.95 | 6400 | 0.4375 | 0.8140 | 0.814 |
| 0.2785 | 44.3 | 6600 | 0.4389 | 0.8157 | 0.816 |
| 0.2678 | 45.64 | 6800 | 0.4411 | 0.8140 | 0.814 |
| 0.2698 | 46.98 | 7000 | 0.4366 | 0.8148 | 0.815 |
| 0.2656 | 48.32 | 7200 | 0.4422 | 0.8217 | 0.822 |
| 0.2664 | 49.66 | 7400 | 0.4372 | 0.8218 | 0.822 |
| 0.2633 | 51.01 | 7600 | 0.4431 | 0.8140 | 0.814 |
| 0.2598 | 52.35 | 7800 | 0.4473 | 0.8165 | 0.817 |
| 0.2547 | 53.69 | 8000 | 0.4394 | 0.8200 | 0.82 |
| 0.2555 | 55.03 | 8200 | 0.4385 | 0.822 | 0.822 |
| 0.2551 | 56.38 | 8400 | 0.4433 | 0.8239 | 0.824 |
| 0.2517 | 57.72 | 8600 | 0.4488 | 0.8217 | 0.822 |
| 0.2521 | 59.06 | 8800 | 0.4459 | 0.8187 | 0.819 |
| 0.2506 | 60.4 | 9000 | 0.4476 | 0.8218 | 0.822 |
| 0.2491 | 61.74 | 9200 | 0.4567 | 0.8134 | 0.814 |
| 0.2485 | 63.09 | 9400 | 0.4474 | 0.8228 | 0.823 |
| 0.2474 | 64.43 | 9600 | 0.4485 | 0.8217 | 0.822 |
| 0.2433 | 65.77 | 9800 | 0.4475 | 0.8218 | 0.822 |
| 0.2437 | 67.11 | 10000 | 0.4489 | 0.8218 | 0.822 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_4-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T00:54:42+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_tf\_4-seqsight\_4096\_512\_46M-L8\_f
=========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3872
* F1 Score: 0.8608
* Accuracy: 0.861
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_Instruction0_OPSAL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
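For reference, a minimal usage sketch (not part of the original card): the checkpoint is a fine-tuned ViT5-large seq2seq model, so it should load with the standard text2text classes. The input below is a placeholder, since the card does not document the expected prompt format.

```python
# Hedged sketch (not from the original card): the expected prompt format is
# undocumented, so the input sentence is only a placeholder.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ThuyNT/CS505_COQE_viT5_train_Instruction0_OPSAL_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Điện thoại A chụp ảnh đẹp hơn điện thoại B."  # "Phone A takes better photos than phone B."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```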
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_OPSAL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_Instruction0_OPSAL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T00:56:55+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# CS505_COQE_viT5_train_Instruction0_OPSAL_v1
This model is a fine-tuned version of VietAI/vit5-large on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# CS505_COQE_viT5_train_Instruction0_OPSAL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# CS505_COQE_viT5_train_Instruction0_OPSAL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model
This model is fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct via mlx-lm.
**Note:** The glaive-function-calling-v2 dataset contains some invalid JSON and single quotes for the arguments' values. I have re-trained the model based on cleaned-up data. If you encounter issues with the function calling JSON format, you may try this new version here: https://huggingface.co/mzbac/llama-3-8B-Instruct-function-calling-v0.2
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "mzbac/llama-3-8B-Instruct-function-calling"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
tool = {
"name": "search_web",
"description": "Perform a web search for a given search terms.",
"parameter": {
"type": "object",
"properties": {
"search_terms": {
"type": "array",
"items": {"type": "string"},
"description": "The search queries for which the search is performed.",
"required": True,
}
}
},
}
messages = [
{
"role": "system",
"content": f"You are a helpful assistant with access to the following functions. Use them if required - {str(tool)}",
},
{"role": "user", "content": "Today's news in Melbourne, just for your information, today is April 27, 2014."},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.1,
)
response = outputs[0]
print(tokenizer.decode(response))
# <|begin_of_text|><|start_header_id|>system<|end_header_id|>
# You are a helpful assistant with access to the following functions. Use them if required - {'name':'search_web', 'description': 'Perform a web search for a given search terms.', 'parameter': {'type': 'object', 'properties': {'search_terms': {'type': 'array', 'items': {'type':'string'}, 'description': 'The search queries for which the search is performed.','required': True}}}}<|eot_id|><|start_header_id|>user<|end_header_id|>
# Today's news in Melbourne, just for your information, today is April 27, 2014.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
# <functioncall> {"name": "search_web", "arguments": '{"search_terms": ["Melbourne news", "April 27, 2014"]}'}<|eot_id|>
```
## Training hyperparameters
lora_config.yaml
```yaml
# The path to the local model directory or Hugging Face repo.
model: "meta-llama/Meta-Llama-3-8B-Instruct"
# Whether or not to train (boolean)
train: true
# Directory with {train, valid, test}.jsonl files
data: "data"
# The PRNG seed
seed: 0
# Number of layers to fine-tune
lora_layers: 32
# Minibatch size.
batch_size: 1
# Iterations to train for.
iters: 6000
# Number of validation batches, -1 uses the entire validation set.
val_batches: 25
# Adam learning rate.
learning_rate: 1e-6
# Number of training steps between loss reporting.
steps_per_report: 10
# Number of training steps between validations.
steps_per_eval: 200
# Load path to resume training with the given adapter weights.
resume_adapter_file: null
# Save/load path for the trained adapter weights.
adapter_path: "adapters"
# Save the model every N iterations.
save_every: 1000
# Evaluate on the test set after training
test: false
# Number of test set batches, -1 uses the entire test set.
test_batches: 100
# Maximum sequence length.
max_seq_length: 8192
# Use gradient checkpointing to reduce memory use.
grad_checkpoint: false
# LoRA parameters can only be specified in a config file
lora_parameters:
# The layer keys to apply LoRA to.
# These will be applied for the last lora_layers
keys: ['mlp.gate_proj', 'mlp.down_proj', 'self_attn.q_proj', 'mlp.up_proj', 'self_attn.o_proj','self_attn.v_proj', 'self_attn.k_proj']
rank: 128
alpha: 256
scale: 10.0
dropout: 0.05
# Schedule can only be specified in a config file, uncomment to use.
#lr_schedule:
# name: cosine_decay
# warmup: 100 # 0 for no warmup
# warmup_init: 1e-7 # 0 if not specified
# arguments: [1e-6, 1000, 1e-7] # passed to scheduler
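# ---------------------------------------------------------------------------
# Note (not part of the original card): with a recent mlx-lm release, a config
# like this is typically launched, and the resulting adapters fused, roughly as:
#   python -m mlx_lm.lora --config lora_config.yaml
#   python -m mlx_lm.fuse --model meta-llama/Meta-Llama-3-8B-Instruct --adapter-path adapters
# Exact flags vary between mlx-lm versions, so treat these commands as a sketch.
# ---------------------------------------------------------------------------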
``` | {"language": ["en"], "license": "llama3", "datasets": ["mzbac/glaive-function-calling-v2-llama-3-format"]} | mzbac/llama-3-8B-Instruct-function-calling | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:mzbac/glaive-function-calling-v2-llama-3-format",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T00:57:38+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #conversational #en #dataset-mzbac/glaive-function-calling-v2-llama-3-format #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model
This model is fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct via mlx-lm.
Note: The glaive-function-calling-v2 dataset contains some invalid JSON and single quotes for the arguments' values. I have re-trained the model based on cleaned-up data. If you encounter issues with the function calling JSON format, you may try this new version here: URL
## Usage
## Training hyperparameters
lora_config.yaml
| [
"# Model\n\nThis model is fine-tuned based on Meta-Llama/Meta-Llama-3-8B instructions via mlx-lm.\n\nNote: The glaive-function-calling-v2 dataset contains some invalid JSON and single quotes for the arguments' values. I have re-trained the model based on cleaned-up data. If you encounter issues with the function calling JSON format, you may try this new version here: URL",
"## Usage",
"## Training hyperparameters\nlora_config.yaml"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #en #dataset-mzbac/glaive-function-calling-v2-llama-3-format #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model\n\nThis model is fine-tuned based on Meta-Llama/Meta-Llama-3-8B instructions via mlx-lm.\n\nNote: The glaive-function-calling-v2 dataset contains some invalid JSON and single quotes for the arguments' values. I have re-trained the model based on cleaned-up data. If you encounter issues with the function calling JSON format, you may try this new version here: URL",
"## Usage",
"## Training hyperparameters\nlora_config.yaml"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_Instruction0_PASOL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_PASOL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_Instruction0_PASOL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T00:59:33+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# CS505_COQE_viT5_train_Instruction0_PASOL_v1
This model is a fine-tuned version of VietAI/vit5-large on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# CS505_COQE_viT5_train_Instruction0_PASOL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# CS505_COQE_viT5_train_Instruction0_PASOL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_Instruction0_APSOL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_APSOL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_Instruction0_APSOL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T01:00:26+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# CS505_COQE_viT5_train_Instruction0_APSOL_v1
This model is a fine-tuned version of VietAI/vit5-large on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# CS505_COQE_viT5_train_Instruction0_APSOL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# CS505_COQE_viT5_train_Instruction0_APSOL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | leminhnhat/bug-impact-level-falcon-7b | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T01:00:32+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs128_nodpo_only4w_userresponse_iter_4
This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_3](https://huggingface.co/ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_3) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
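As a rough illustration only (not the authors' training script), hyperparameters like the above are commonly wired into TRL's `DPOTrainer` along the following lines. The preference dataset, the `beta` value, and the exact TRL API are assumptions; the `trl` interface has changed across versions.

```python
# Hedged sketch (not from the original card): a DPO run mirroring the listed
# hyperparameters. Dataset, beta, and API details are assumptions.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_3"  # starting point per the card
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Tiny placeholder preference set; the real run used the "updated" and "original" datasets.
train_dataset = Dataset.from_dict({
    "prompt": ["What is the capital of France?"],
    "chosen": ["The capital of France is Paris."],
    "rejected": ["I am not sure."],
})

args = TrainingArguments(
    output_dir="dpo-iter-4",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,   # 8 devices x 8 x 2 = effective batch size 128
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    remove_unused_columns=False,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,        # TRL creates a frozen reference copy when None
    args=args,
    beta=0.1,              # assumption; the card does not state beta
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```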
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_3", "model-index": [{"name": "0.001_4iters_bs128_nodpo_only4w_userresponse_iter_4", "results": []}]} | ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_4 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T01:01:16+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.001_4iters_bs128_nodpo_only4w_userresponse_iter_4
This model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_3 on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| [
"# 0.001_4iters_bs128_nodpo_only4w_userresponse_iter_4\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_3 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.001_4iters_bs128_nodpo_only4w_userresponse_iter_4\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_3 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5633
- F1 Score: 0.8590
- Accuracy: 0.859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
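Beyond plain adapter loading, a hedged sketch (not from the original card) of folding this LoRA adapter back into the base model with PEFT's `merge_and_unload`, so it can be used without the `peft` runtime. The classification head is an assumption, and extra arguments such as `num_labels` or `trust_remote_code` may be needed.

```python
# Hedged sketch (not from the original card): merge the LoRA weights into the
# base model and save a plain transformers checkpoint.
from peft import AutoPeftModelForSequenceClassification

adapter_id = "mahdibaghbanzadeh/GUE_tf_4-seqsight_4096_512_46M-L32_f"
peft_model = AutoPeftModelForSequenceClassification.from_pretrained(adapter_id)
merged = peft_model.merge_and_unload()  # returns the base model with LoRA weights folded in
merged.save_pretrained("seqsight-GUE_tf_4-L32-merged")
```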
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4994 | 1.34 | 200 | 0.4682 | 0.7780 | 0.778 |
| 0.4392 | 2.68 | 400 | 0.4531 | 0.7976 | 0.798 |
| 0.4164 | 4.03 | 600 | 0.4410 | 0.7987 | 0.799 |
| 0.4047 | 5.37 | 800 | 0.4483 | 0.7984 | 0.799 |
| 0.3858 | 6.71 | 1000 | 0.4345 | 0.8053 | 0.806 |
| 0.3737 | 8.05 | 1200 | 0.4310 | 0.8200 | 0.82 |
| 0.3573 | 9.4 | 1400 | 0.4410 | 0.8099 | 0.81 |
| 0.3463 | 10.74 | 1600 | 0.4223 | 0.8069 | 0.807 |
| 0.3289 | 12.08 | 1800 | 0.4386 | 0.8066 | 0.807 |
| 0.3161 | 13.42 | 2000 | 0.4191 | 0.8126 | 0.813 |
| 0.2995 | 14.77 | 2200 | 0.4017 | 0.8209 | 0.821 |
| 0.2898 | 16.11 | 2400 | 0.4091 | 0.8309 | 0.831 |
| 0.2732 | 17.45 | 2600 | 0.4053 | 0.8340 | 0.834 |
| 0.2635 | 18.79 | 2800 | 0.4117 | 0.8310 | 0.831 |
| 0.2499 | 20.13 | 3000 | 0.3964 | 0.8510 | 0.851 |
| 0.2425 | 21.48 | 3200 | 0.4229 | 0.8436 | 0.844 |
| 0.2274 | 22.82 | 3400 | 0.4003 | 0.8509 | 0.851 |
| 0.2198 | 24.16 | 3600 | 0.4151 | 0.8530 | 0.853 |
| 0.2084 | 25.5 | 3800 | 0.4365 | 0.8488 | 0.849 |
| 0.2023 | 26.85 | 4000 | 0.4383 | 0.8464 | 0.847 |
| 0.1914 | 28.19 | 4200 | 0.4295 | 0.8590 | 0.859 |
| 0.1869 | 29.53 | 4400 | 0.4562 | 0.8647 | 0.865 |
| 0.182 | 30.87 | 4600 | 0.4308 | 0.8560 | 0.856 |
| 0.1731 | 32.21 | 4800 | 0.4319 | 0.8680 | 0.868 |
| 0.1691 | 33.56 | 5000 | 0.4492 | 0.8659 | 0.866 |
| 0.1658 | 34.9 | 5200 | 0.4321 | 0.8640 | 0.864 |
| 0.1495 | 36.24 | 5400 | 0.4748 | 0.8544 | 0.855 |
| 0.1496 | 37.58 | 5600 | 0.4670 | 0.8709 | 0.871 |
| 0.1468 | 38.93 | 5800 | 0.4527 | 0.8709 | 0.871 |
| 0.1377 | 40.27 | 6000 | 0.4790 | 0.8747 | 0.875 |
| 0.1372 | 41.61 | 6200 | 0.4706 | 0.8749 | 0.875 |
| 0.1304 | 42.95 | 6400 | 0.4814 | 0.8758 | 0.876 |
| 0.1323 | 44.3 | 6600 | 0.4978 | 0.8678 | 0.868 |
| 0.1271 | 45.64 | 6800 | 0.4728 | 0.8759 | 0.876 |
| 0.1229 | 46.98 | 7000 | 0.5029 | 0.8748 | 0.875 |
| 0.1169 | 48.32 | 7200 | 0.5104 | 0.8798 | 0.88 |
| 0.1194 | 49.66 | 7400 | 0.5029 | 0.8728 | 0.873 |
| 0.1135 | 51.01 | 7600 | 0.4992 | 0.8809 | 0.881 |
| 0.1065 | 52.35 | 7800 | 0.5255 | 0.8737 | 0.874 |
| 0.109 | 53.69 | 8000 | 0.5078 | 0.8789 | 0.879 |
| 0.1066 | 55.03 | 8200 | 0.4901 | 0.8830 | 0.883 |
| 0.1047 | 56.38 | 8400 | 0.5498 | 0.8707 | 0.871 |
| 0.105 | 57.72 | 8600 | 0.5313 | 0.8737 | 0.874 |
| 0.1023 | 59.06 | 8800 | 0.5276 | 0.8738 | 0.874 |
| 0.1058 | 60.4 | 9000 | 0.5221 | 0.8718 | 0.872 |
| 0.0966 | 61.74 | 9200 | 0.5393 | 0.8707 | 0.871 |
| 0.0987 | 63.09 | 9400 | 0.5360 | 0.8728 | 0.873 |
| 0.0956 | 64.43 | 9600 | 0.5407 | 0.8737 | 0.874 |
| 0.0902 | 65.77 | 9800 | 0.5391 | 0.8738 | 0.874 |
| 0.0944 | 67.11 | 10000 | 0.5416 | 0.8748 | 0.875 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_4-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T01:02:22+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_tf\_4-seqsight\_4096\_512\_46M-L32\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5633
* F1 Score: 0.8590
* Accuracy: 0.859
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5315
- F1 Score: 0.7102
- Accuracy: 0.714
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6226 | 0.93 | 200 | 0.5869 | 0.6793 | 0.68 |
| 0.5993 | 1.87 | 400 | 0.5752 | 0.692 | 0.692 |
| 0.591 | 2.8 | 600 | 0.5625 | 0.6920 | 0.693 |
| 0.5831 | 3.74 | 800 | 0.5563 | 0.6959 | 0.696 |
| 0.577 | 4.67 | 1000 | 0.5522 | 0.6977 | 0.698 |
| 0.574 | 5.61 | 1200 | 0.5500 | 0.7049 | 0.705 |
| 0.5676 | 6.54 | 1400 | 0.5458 | 0.7045 | 0.705 |
| 0.5636 | 7.48 | 1600 | 0.5450 | 0.7120 | 0.712 |
| 0.5573 | 8.41 | 1800 | 0.5424 | 0.7060 | 0.706 |
| 0.5589 | 9.35 | 2000 | 0.5400 | 0.7207 | 0.721 |
| 0.5506 | 10.28 | 2200 | 0.5428 | 0.7069 | 0.707 |
| 0.5493 | 11.21 | 2400 | 0.5354 | 0.7245 | 0.725 |
| 0.5483 | 12.15 | 2600 | 0.5316 | 0.7176 | 0.719 |
| 0.5444 | 13.08 | 2800 | 0.5328 | 0.7232 | 0.725 |
| 0.5452 | 14.02 | 3000 | 0.5326 | 0.7201 | 0.72 |
| 0.5421 | 14.95 | 3200 | 0.5282 | 0.7258 | 0.726 |
| 0.5399 | 15.89 | 3400 | 0.5276 | 0.7306 | 0.731 |
| 0.5373 | 16.82 | 3600 | 0.5328 | 0.7191 | 0.719 |
| 0.5357 | 17.76 | 3800 | 0.5267 | 0.7308 | 0.732 |
| 0.5358 | 18.69 | 4000 | 0.5293 | 0.7160 | 0.716 |
| 0.536 | 19.63 | 4200 | 0.5270 | 0.7248 | 0.726 |
| 0.5352 | 20.56 | 4400 | 0.5266 | 0.7281 | 0.729 |
| 0.5321 | 21.5 | 4600 | 0.5293 | 0.7079 | 0.708 |
| 0.529 | 22.43 | 4800 | 0.5266 | 0.7239 | 0.724 |
| 0.5304 | 23.36 | 5000 | 0.5281 | 0.7250 | 0.725 |
| 0.5283 | 24.3 | 5200 | 0.5289 | 0.7099 | 0.71 |
| 0.5249 | 25.23 | 5400 | 0.5245 | 0.7247 | 0.726 |
| 0.5307 | 26.17 | 5600 | 0.5234 | 0.7255 | 0.726 |
| 0.522 | 27.1 | 5800 | 0.5223 | 0.7278 | 0.729 |
| 0.5212 | 28.04 | 6000 | 0.5244 | 0.7169 | 0.717 |
| 0.5232 | 28.97 | 6200 | 0.5285 | 0.7181 | 0.718 |
| 0.5234 | 29.91 | 6400 | 0.5250 | 0.7268 | 0.727 |
| 0.522 | 30.84 | 6600 | 0.5201 | 0.7236 | 0.724 |
| 0.5196 | 31.78 | 6800 | 0.5209 | 0.7269 | 0.727 |
| 0.5165 | 32.71 | 7000 | 0.5219 | 0.7266 | 0.727 |
| 0.523 | 33.64 | 7200 | 0.5196 | 0.7205 | 0.721 |
| 0.5156 | 34.58 | 7400 | 0.5252 | 0.7210 | 0.721 |
| 0.518 | 35.51 | 7600 | 0.5207 | 0.7210 | 0.721 |
| 0.5168 | 36.45 | 7800 | 0.5248 | 0.7190 | 0.719 |
| 0.5147 | 37.38 | 8000 | 0.5231 | 0.7221 | 0.722 |
| 0.5117 | 38.32 | 8200 | 0.5204 | 0.7262 | 0.727 |
| 0.5136 | 39.25 | 8400 | 0.5201 | 0.7226 | 0.723 |
| 0.5128 | 40.19 | 8600 | 0.5218 | 0.7220 | 0.722 |
| 0.5125 | 41.12 | 8800 | 0.5211 | 0.7228 | 0.723 |
| 0.5149 | 42.06 | 9000 | 0.5206 | 0.7239 | 0.724 |
| 0.5136 | 42.99 | 9200 | 0.5205 | 0.7230 | 0.723 |
| 0.5104 | 43.93 | 9400 | 0.5213 | 0.7191 | 0.719 |
| 0.511 | 44.86 | 9600 | 0.5198 | 0.7258 | 0.726 |
| 0.5117 | 45.79 | 9800 | 0.5206 | 0.7299 | 0.73 |
| 0.5078 | 46.73 | 10000 | 0.5208 | 0.7269 | 0.727 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_3-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T01:02:54+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_tf\_3-seqsight\_4096\_512\_46M-L1\_f
=========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5315
* F1 Score: 0.7102
* Accuracy: 0.714
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for johnsnowlabs/JSL-MedLlama-3-70B-v2.0
[<img src="https://repository-images.githubusercontent.com/104670986/2e728700-ace4-11ea-9cfc-f3e060b25ddf">](http://www.johnsnowlabs.com)
This model is developed by [John Snow Labs](https://www.johnsnowlabs.com/).
This model is trained on medical datasets to provide state-of-the-art performance on biomedical benchmarks: [Open Medical LLM Leaderboard](https://huggingface.co/spaces/openlifescienceai/open_medical_llm_leaderboard).
This model is available under a [CC-BY-NC-ND](https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en) license and must also conform to this [Acceptable Use Policy](https://huggingface.co/johnsnowlabs). If you need to license this model for commercial use, please contact us at info@johnsnowlabs.com.
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "johnsnowlabs/JSL-MedLlama-3-70B-v2.0"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## 🏆 Evaluation
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------------------------|-------|------|-----:|--------|-----:|---|-----:|
|stem |N/A |none | 0|acc |0.7773|± |0.0049|
| | |none | 0|acc_norm|0.7505|± |0.0059|
| - medmcqa |Yaml |none | 0|acc |0.7413|± |0.0068|
| | |none | 0|acc_norm|0.7413|± |0.0068|
| - medqa_4options |Yaml |none | 0|acc |0.7808|± |0.0116|
| | |none | 0|acc_norm|0.7808|± |0.0116|
| - anatomy (mmlu) | 0|none | 0|acc |0.8444|± |0.0313|
| - clinical_knowledge (mmlu) | 0|none | 0|acc |0.9245|± |0.0163|
| - college_biology (mmlu) | 0|none | 0|acc |0.9375|± |0.0202|
| - college_medicine (mmlu) | 0|none | 0|acc |0.8555|± |0.0268|
| - medical_genetics (mmlu) | 0|none | 0|acc |0.9300|± |0.0256|
| - professional_medicine (mmlu)| 0|none | 0|acc |0.9375|± |0.0147|
| - pubmedqa | 1|none | 0|acc |0.7820|± |0.0185|
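Scores in this style are typically produced with EleutherAI's lm-evaluation-harness; a hedged sketch of such a run is shown below (the `simple_evaluate` API and task names are assumptions based on harness v0.4 and are not confirmed by this card). The grouped summary reported by the card follows.

```python
# Hedged sketch (not from the original card): evaluating the model on a subset
# of the medical tasks with lm-evaluation-harness (API/task names assumed, v0.4).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=johnsnowlabs/JSL-MedLlama-3-70B-v2.0,dtype=bfloat16",
    tasks=["medmcqa", "medqa_4options", "pubmedqa"],
    num_fewshot=0,
)
print(results["results"])
```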
|Groups|Version|Filter|n-shot| Metric |Value | |Stderr|
|------|-------|------|-----:|--------|-----:|---|-----:|
|stem |N/A |none | 0|acc |0.7773|± |0.0049|
| | |none | 0|acc_norm|0.7505|± |0.0059| | {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["medical"]} | johnsnowlabs/JSL-MedLlama-3-70B-v2.0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"medical",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T01:04:43+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #medical #conversational #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Model Card for johnsnowlabs/JSL-MedLlama-3-70B-v2.0
===================================================
<img src="URL
This model is developed by John Snow Labs.
This model is trained on medical datasets to provide state-of-the-art performance on biomedical benchmarks: Open Medical LLM Leaderboard.
This model is available under a CC-BY-NC-ND license and must also conform to this Acceptable Use Policy. If you need to license this model for commercial use, please contact us at info@URL.
Usage
-----
Evaluation
----------
| [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #medical #conversational #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | null |
# MeliodasExperiment26-7B
MeliodasExperiment26-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: AurelPx/Meliodas-7b-dare
- model: yam-peleg/Experiment26-7B
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/MeliodasExperiment26-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]} | automerger/MeliodasExperiment26-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null | 2024-04-27T01:08:31+00:00 | [] | [] | TAGS
#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us
|
# MeliodasExperiment26-7B
MeliodasExperiment26-7B is an automated merge created by Maxime Labonne using the following configuration.
## Configuration
## Usage
| [
"# MeliodasExperiment26-7B\n\nMeliodasExperiment26-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] | [
"TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n",
"# MeliodasExperiment26-7B\n\nMeliodasExperiment26-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] |
text-generation | transformers |
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-70B builds upon the powerful foundation of the [Meta-Llama-3-70B-Instruct](meta-llama/Meta-Llama-3-70B-Instruct) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom, diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.
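For readers unfamiliar with DPO, the snippet below is a minimal, illustrative sketch of preference tuning with the `trl` library. It is not the OpenBioLLM training script: the dataset name is a placeholder, the hyperparameters are arbitrary, and argument names can vary slightly across `trl` versions.

```python
# Illustrative DPO sketch only; not the OpenBioLLM-70B training code.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "meta-llama/Meta-Llama-3-70B-Instruct"            # base policy (from this card)
model = AutoModelForCausalLM.from_pretrained(base)       # policy being optimized
ref_model = AutoModelForCausalLM.from_pretrained(base)   # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder preference dataset with "prompt", "chosen", "rejected" columns.
prefs = load_dataset("my-org/medical-preference-pairs", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    beta=0.1,  # strength of the KL penalty toward the reference model
    args=TrainingArguments(output_dir="dpo-out", per_device_train_batch_size=1),
    train_dataset=prefs,
    tokenizer=tokenizer,
)
trainer.train()
```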
⚙️ **Release Details**:
- **Model Size**: 70 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-70B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [Meta-Llama-3-70B-Instruct](meta-llama/Meta-Llama-3-70B-Instruct)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-70B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise, performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-70B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
{"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
    do_sample=False,  # greedy decoding; the note above recommends temperature = 0
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 8
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **Below results are from the quantized version of OpenBioLLM-70B.**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.
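As a concrete, hedged sketch (the system prompt and JSON output format below are illustrative choices, not a documented interface), entity extraction can be prompted with the same `pipeline` object created in the "Use with transformers" section above:

```python
# Reuses the `pipeline` and `terminators` objects defined earlier; prompt wording is illustrative.
ner_messages = [
    {"role": "system", "content": "You are OpenBioLLM, a biomedical assistant. Extract all diseases, "
                                  "medications, and procedures from the clinical text and return them "
                                  "as a JSON object with those three keys."},
    {"role": "user", "content": "Patient with type 2 diabetes was started on metformin after an HbA1c of 8.2%."},
]
ner_prompt = pipeline.tokenizer.apply_chat_template(ner_messages, tokenize=False, add_generation_prompt=True)
ner_out = pipeline(ner_prompt, max_new_tokens=128, eos_token_id=terminators, do_sample=False)
print(ner_out[0]["generated_text"][len(ner_prompt):])
```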



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | {"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-70B-Instruct", "widget": [{"example_title": "OpenBioLLM-70B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-70B", "results": []}]} | LoneStriker/OpenBioLLM-Llama3-70B-2.4bpw-h6-exl2 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"conversational",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T01:08:34+00:00 | [
"2305.18290",
"2303.13375",
"2212.13138",
"2305.09617",
"2402.07023"
] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #conversational #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|

Advancing Open-source Large Language Models in Medical Domain
=============================================================
Online Demo
|
GitHub
|
[](#) |
Discord
!image/jpeg
Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
Biomedical Specialization: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
Superior Performance: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
Advanced Training Techniques: OpenBioLLM-70B builds upon the powerful foundation of the Meta-Llama-3-70B-Instruct model. It incorporates the DPO dataset and fine-tuning recipe along with a custom, diverse medical instruction dataset. Key components of the training pipeline include:
</li>
<li>Fine-tuning dataset: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)</li>
</ul>
<p>This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.</p>
<p>️ Release Details:</p>
<ul>
<li>Model Size: 70 billion parameters</li>
<li>Quantization: Optimized quantized versions available Here</li>
<li>Language(s) (NLP): en</li>
<li>Developed By: Ankit Pal (Aaditya Ura) from Saama AI Labs</li>
<li>License: Meta-Llama License</li>
<li>Fine-tuned from models: Meta-Llama-3-70B-Instruct</li>
<li>Resources for more information:
<ul>
<li>Paper: Coming soon</li>
</ul>
</li>
</ul>
<p>The model can be fine-tuned for more specialized tasks and datasets as needed.</p>
<p>OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.</p>
<p>We are excited to share OpenBioLLM-70B with researchers and developers around the world.</p>
<h3>Use with transformers</h3>
<p>Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise, performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.</p>
<p>See the snippet below for usage with Transformers:</p>
<h2>Training procedure</h2>
<h3>Training hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>learning_rate: 0.0002</li>
<li>lr_scheduler: cosine</li>
<li>train_batch_size: 12</li>
<li>eval_batch_size: 8</li>
<li>GPU: H100 80GB SXM5</li>
<li>num_devices: 8</li>
<li>optimizer: adamw_bnb_8bit</li>
<li>lr_scheduler_warmup_steps: 100</li>
<li>num_epochs: 4</li>
</ul>
</details>
<h3>Peft hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>adapter: qlora</li>
<li>lora_r: 128</li>
<li>lora_alpha: 256</li>
<li>lora_dropout: 0.05</li>
<li>lora_target_linear: true</li>
</ul>
<p>-lora_target_modules:</p>
<ul>
<li>q_proj</li>
<li>v_proj</li>
<li>k_proj</li>
<li>o_proj</li>
<li>gate_proj</li>
<li>down_proj</li>
<li>up_proj</li>
</ul>
</details>
<h3>Training results</h3>
<h3>Framework versions</h3>
<ul>
<li>Transformers 4.39.3</li>
<li>Pytorch 2.1.2+cu121</li>
<li>Datasets 2.18.0</li>
<li>Tokenizers 0.15.1</li>
<li>Axolotl</li>
<li>Lm harness for evaluation</li>
</ul>
<h1>Benchmark Results</h1>
<p>OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.</p>
<p>The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.</p>
<p>De-Identification: OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.</p>
<p>!image/png</p>
<p>Advisory Notice!</p>
<p>While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.</p>
<p>Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.</p>
<p>Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:</p>
<ul>
<li>Thorough testing and evaluation in relevant clinical scenarios</li>
<li>Alignment with evidence-based guidelines and best practices</li>
<li>Mitigation of potential biases and failure modes</li>
<li>Integration with human oversight and interpretation</li>
<li>Compliance with regulatory and ethical standards</li>
</ul>
<p>Always consult a qualified healthcare provider for personal medical needs.</p>
<p>If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:</p>
<p>The accompanying paper is currently in progress and will be released soon.</p>
Contact
--------
We look forward to hearing from you and collaborating on this exciting project!
Contributors:
* Ankit Pal (Aaditya Ura) [aadityaura at gmail dot com]
* Saama AI Labs
* Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
References
==========
We thank the Meta Team for their amazing models!
Result sources
* [1] GPT-4 Capabilities of GPT-4 on Medical Challenge Problems
* [2] Med-PaLM-1 Large Language Models Encode Clinical Knowledge
* [3] Med-PaLM-2 Towards Expert-Level Medical Question Answering with Large Language Models
* [4] Gemini-1.0 Gemini Goes to Med School
| [] | [
"TAGS\n#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #conversational #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5223
- F1 Score: 0.7373
- Accuracy: 0.739
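
A rough, untested loading sketch is shown below; the head class, `num_labels`, and the need for `trust_remote_code` are assumptions, since the card does not document them.

```python
# Hypothetical loading sketch for this PEFT adapter; model class and label count are assumptions.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_4096_512_46M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_3-seqsight_4096_512_46M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```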
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6108 | 0.93 | 200 | 0.5608 | 0.6923 | 0.693 |
| 0.5825 | 1.87 | 400 | 0.5563 | 0.7069 | 0.707 |
| 0.5685 | 2.8 | 600 | 0.5375 | 0.7158 | 0.716 |
| 0.5571 | 3.74 | 800 | 0.5398 | 0.7010 | 0.701 |
| 0.549 | 4.67 | 1000 | 0.5198 | 0.7276 | 0.728 |
| 0.5494 | 5.61 | 1200 | 0.5403 | 0.7126 | 0.713 |
| 0.56 | 6.54 | 1400 | 0.5349 | 0.7261 | 0.726 |
| 0.5408 | 7.48 | 1600 | 0.5267 | 0.7380 | 0.739 |
| 0.5303 | 8.41 | 1800 | 0.5230 | 0.7210 | 0.721 |
| 0.526 | 9.35 | 2000 | 0.5232 | 0.7299 | 0.732 |
| 0.5144 | 10.28 | 2200 | 0.5267 | 0.7450 | 0.745 |
| 0.5123 | 11.21 | 2400 | 0.5370 | 0.7337 | 0.735 |
| 0.5033 | 12.15 | 2600 | 0.5448 | 0.7354 | 0.738 |
| 0.4978 | 13.08 | 2800 | 0.5457 | 0.7251 | 0.73 |
| 0.4941 | 14.02 | 3000 | 0.5301 | 0.7247 | 0.725 |
| 0.4876 | 14.95 | 3200 | 0.5307 | 0.7329 | 0.734 |
| 0.4826 | 15.89 | 3400 | 0.5366 | 0.7454 | 0.746 |
| 0.4758 | 16.82 | 3600 | 0.5519 | 0.7237 | 0.724 |
| 0.4688 | 17.76 | 3800 | 0.5397 | 0.7368 | 0.737 |
| 0.4654 | 18.69 | 4000 | 0.5552 | 0.7381 | 0.738 |
| 0.4639 | 19.63 | 4200 | 0.5405 | 0.7293 | 0.731 |
| 0.4603 | 20.56 | 4400 | 0.5599 | 0.7334 | 0.734 |
| 0.4531 | 21.5 | 4600 | 0.5641 | 0.7199 | 0.72 |
| 0.4439 | 22.43 | 4800 | 0.5755 | 0.7261 | 0.726 |
| 0.445 | 23.36 | 5000 | 0.5866 | 0.7261 | 0.726 |
| 0.4357 | 24.3 | 5200 | 0.5901 | 0.7194 | 0.72 |
| 0.4346 | 25.23 | 5400 | 0.5861 | 0.7320 | 0.732 |
| 0.432 | 26.17 | 5600 | 0.5853 | 0.7281 | 0.729 |
| 0.4265 | 27.1 | 5800 | 0.5771 | 0.7338 | 0.734 |
| 0.4206 | 28.04 | 6000 | 0.5928 | 0.7286 | 0.729 |
| 0.4202 | 28.97 | 6200 | 0.5858 | 0.7271 | 0.727 |
| 0.4166 | 29.91 | 6400 | 0.5868 | 0.7288 | 0.729 |
| 0.4071 | 30.84 | 6600 | 0.5710 | 0.7360 | 0.737 |
| 0.4075 | 31.78 | 6800 | 0.5829 | 0.7309 | 0.731 |
| 0.3999 | 32.71 | 7000 | 0.5909 | 0.7318 | 0.732 |
| 0.3992 | 33.64 | 7200 | 0.5878 | 0.7334 | 0.734 |
| 0.396 | 34.58 | 7400 | 0.6131 | 0.726 | 0.726 |
| 0.3995 | 35.51 | 7600 | 0.6044 | 0.7311 | 0.731 |
| 0.394 | 36.45 | 7800 | 0.6331 | 0.7281 | 0.728 |
| 0.3908 | 37.38 | 8000 | 0.6152 | 0.7338 | 0.734 |
| 0.3844 | 38.32 | 8200 | 0.6255 | 0.7268 | 0.727 |
| 0.3858 | 39.25 | 8400 | 0.6303 | 0.7271 | 0.727 |
| 0.3823 | 40.19 | 8600 | 0.6294 | 0.7300 | 0.73 |
| 0.3774 | 41.12 | 8800 | 0.6246 | 0.7296 | 0.73 |
| 0.3794 | 42.06 | 9000 | 0.6387 | 0.7329 | 0.733 |
| 0.3797 | 42.99 | 9200 | 0.6269 | 0.7280 | 0.728 |
| 0.3781 | 43.93 | 9400 | 0.6261 | 0.7280 | 0.728 |
| 0.3771 | 44.86 | 9600 | 0.6308 | 0.7369 | 0.737 |
| 0.3733 | 45.79 | 9800 | 0.6318 | 0.7350 | 0.735 |
| 0.3734 | 46.73 | 10000 | 0.6363 | 0.7370 | 0.737 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_3-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T01:09:05+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_tf\_3-seqsight\_4096\_512\_46M-L32\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5223
* F1 Score: 0.7373
* Accuracy: 0.739
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5282
- F1 Score: 0.7300
- Accuracy: 0.733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
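
Purely as an illustration (the original training script is not part of this card), the settings above map roughly onto a `transformers.TrainingArguments` object as follows:

```python
# Rough mapping of the reported hyperparameters; not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_tf_3-seqsight_4096_512_46M-L8_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,  # "training_steps" in the list above
)
```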
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6154 | 0.93 | 200 | 0.5729 | 0.6841 | 0.684 |
| 0.589 | 1.87 | 400 | 0.5770 | 0.6839 | 0.685 |
| 0.5798 | 2.8 | 600 | 0.5512 | 0.7087 | 0.709 |
| 0.5697 | 3.74 | 800 | 0.5494 | 0.7090 | 0.709 |
| 0.5615 | 4.67 | 1000 | 0.5368 | 0.7056 | 0.706 |
| 0.5596 | 5.61 | 1200 | 0.5311 | 0.7168 | 0.718 |
| 0.5522 | 6.54 | 1400 | 0.5327 | 0.7135 | 0.714 |
| 0.5454 | 7.48 | 1600 | 0.5327 | 0.7259 | 0.726 |
| 0.5423 | 8.41 | 1800 | 0.5324 | 0.7204 | 0.721 |
| 0.5427 | 9.35 | 2000 | 0.5291 | 0.7278 | 0.729 |
| 0.5336 | 10.28 | 2200 | 0.5332 | 0.7271 | 0.727 |
| 0.5328 | 11.21 | 2400 | 0.5336 | 0.7270 | 0.728 |
| 0.529 | 12.15 | 2600 | 0.5335 | 0.7315 | 0.734 |
| 0.5253 | 13.08 | 2800 | 0.5434 | 0.7352 | 0.74 |
| 0.5254 | 14.02 | 3000 | 0.5290 | 0.7342 | 0.735 |
| 0.5218 | 14.95 | 3200 | 0.5234 | 0.7374 | 0.739 |
| 0.5188 | 15.89 | 3400 | 0.5281 | 0.7245 | 0.725 |
| 0.5138 | 16.82 | 3600 | 0.5324 | 0.7216 | 0.722 |
| 0.5121 | 17.76 | 3800 | 0.5307 | 0.7321 | 0.733 |
| 0.5101 | 18.69 | 4000 | 0.5323 | 0.7181 | 0.718 |
| 0.5105 | 19.63 | 4200 | 0.5319 | 0.7356 | 0.737 |
| 0.5089 | 20.56 | 4400 | 0.5318 | 0.7387 | 0.74 |
| 0.505 | 21.5 | 4600 | 0.5371 | 0.7210 | 0.721 |
| 0.4987 | 22.43 | 4800 | 0.5385 | 0.7294 | 0.73 |
| 0.5028 | 23.36 | 5000 | 0.5404 | 0.7281 | 0.728 |
| 0.4989 | 24.3 | 5200 | 0.5474 | 0.7233 | 0.724 |
| 0.494 | 25.23 | 5400 | 0.5340 | 0.7340 | 0.735 |
| 0.498 | 26.17 | 5600 | 0.5314 | 0.7328 | 0.734 |
| 0.4909 | 27.1 | 5800 | 0.5373 | 0.7275 | 0.728 |
| 0.4893 | 28.04 | 6000 | 0.5386 | 0.7297 | 0.73 |
| 0.4903 | 28.97 | 6200 | 0.5429 | 0.7150 | 0.715 |
| 0.4912 | 29.91 | 6400 | 0.5367 | 0.7233 | 0.724 |
| 0.4869 | 30.84 | 6600 | 0.5278 | 0.7353 | 0.736 |
| 0.4845 | 31.78 | 6800 | 0.5310 | 0.7338 | 0.734 |
| 0.4836 | 32.71 | 7000 | 0.5335 | 0.7375 | 0.738 |
| 0.4839 | 33.64 | 7200 | 0.5342 | 0.7209 | 0.721 |
| 0.4788 | 34.58 | 7400 | 0.5403 | 0.7180 | 0.718 |
| 0.4833 | 35.51 | 7600 | 0.5345 | 0.7190 | 0.719 |
| 0.4801 | 36.45 | 7800 | 0.5404 | 0.7220 | 0.722 |
| 0.4782 | 37.38 | 8000 | 0.5353 | 0.7308 | 0.731 |
| 0.4756 | 38.32 | 8200 | 0.5350 | 0.7366 | 0.737 |
| 0.4741 | 39.25 | 8400 | 0.5389 | 0.7359 | 0.736 |
| 0.4746 | 40.19 | 8600 | 0.5369 | 0.7318 | 0.732 |
| 0.4745 | 41.12 | 8800 | 0.5377 | 0.7328 | 0.733 |
| 0.4741 | 42.06 | 9000 | 0.5390 | 0.7280 | 0.728 |
| 0.4745 | 42.99 | 9200 | 0.5382 | 0.7260 | 0.726 |
| 0.472 | 43.93 | 9400 | 0.5387 | 0.7251 | 0.725 |
| 0.4707 | 44.86 | 9600 | 0.5377 | 0.7300 | 0.73 |
| 0.4701 | 45.79 | 9800 | 0.5391 | 0.728 | 0.728 |
| 0.4677 | 46.73 | 10000 | 0.5395 | 0.7290 | 0.729 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_3-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T01:09:05+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_tf\_3-seqsight\_4096\_512\_46M-L8\_f
=========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5282
* F1 Score: 0.7300
* Accuracy: 0.733
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_4
This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_3](https://huggingface.co/ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_3) on the updated and the original datasets.
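
As a rough, untested sketch (chat template availability, dtype, and generation settings are assumptions), the model can be loaded like any other `transformers` causal LM:

```python
# Hypothetical usage sketch; chat template, dtype, and sampling settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "In one paragraph, what does DPO change about a base model?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```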
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_3", "model-index": [{"name": "0.001_4iters_bs256_nodpo_only4w_userresponse_iter_4", "results": []}]} | ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_4 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T01:10:49+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_4
This model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_3 on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| [
"# 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_4\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_3 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_4\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_3 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4291
- F1 Score: 0.7986
- Accuracy: 0.799
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5673 | 1.34 | 200 | 0.5263 | 0.7249 | 0.726 |
| 0.5317 | 2.68 | 400 | 0.5122 | 0.7597 | 0.76 |
| 0.5187 | 4.03 | 600 | 0.5044 | 0.7569 | 0.757 |
| 0.5111 | 5.37 | 800 | 0.4975 | 0.7610 | 0.761 |
| 0.507 | 6.71 | 1000 | 0.4986 | 0.7528 | 0.753 |
| 0.498 | 8.05 | 1200 | 0.4975 | 0.7568 | 0.757 |
| 0.4929 | 9.4 | 1400 | 0.4913 | 0.7577 | 0.758 |
| 0.4904 | 10.74 | 1600 | 0.4857 | 0.7549 | 0.755 |
| 0.4838 | 12.08 | 1800 | 0.4980 | 0.7636 | 0.764 |
| 0.4827 | 13.42 | 2000 | 0.4954 | 0.7569 | 0.757 |
| 0.4799 | 14.77 | 2200 | 0.4875 | 0.7448 | 0.746 |
| 0.4719 | 16.11 | 2400 | 0.4951 | 0.7579 | 0.758 |
| 0.4755 | 17.45 | 2600 | 0.4845 | 0.7579 | 0.758 |
| 0.4659 | 18.79 | 2800 | 0.4827 | 0.7617 | 0.762 |
| 0.4668 | 20.13 | 3000 | 0.4839 | 0.7590 | 0.759 |
| 0.4635 | 21.48 | 3200 | 0.4930 | 0.7630 | 0.763 |
| 0.4629 | 22.82 | 3400 | 0.4865 | 0.7679 | 0.768 |
| 0.4582 | 24.16 | 3600 | 0.4818 | 0.7678 | 0.768 |
| 0.4549 | 25.5 | 3800 | 0.4871 | 0.7581 | 0.759 |
| 0.454 | 26.85 | 4000 | 0.4836 | 0.7614 | 0.762 |
| 0.453 | 28.19 | 4200 | 0.4829 | 0.7605 | 0.761 |
| 0.4509 | 29.53 | 4400 | 0.4892 | 0.7610 | 0.761 |
| 0.4503 | 30.87 | 4600 | 0.4881 | 0.7639 | 0.764 |
| 0.4481 | 32.21 | 4800 | 0.4908 | 0.7608 | 0.761 |
| 0.4482 | 33.56 | 5000 | 0.4896 | 0.7600 | 0.76 |
| 0.4467 | 34.9 | 5200 | 0.4922 | 0.7669 | 0.767 |
| 0.4395 | 36.24 | 5400 | 0.4873 | 0.7709 | 0.771 |
| 0.4408 | 37.58 | 5600 | 0.4899 | 0.7648 | 0.765 |
| 0.441 | 38.93 | 5800 | 0.4909 | 0.7654 | 0.766 |
| 0.4376 | 40.27 | 6000 | 0.4887 | 0.7575 | 0.758 |
| 0.4365 | 41.61 | 6200 | 0.4922 | 0.7615 | 0.762 |
| 0.4366 | 42.95 | 6400 | 0.4944 | 0.7647 | 0.765 |
| 0.4332 | 44.3 | 6600 | 0.4940 | 0.7598 | 0.76 |
| 0.439 | 45.64 | 6800 | 0.4908 | 0.7649 | 0.765 |
| 0.4314 | 46.98 | 7000 | 0.4927 | 0.7637 | 0.764 |
| 0.4287 | 48.32 | 7200 | 0.4924 | 0.7629 | 0.763 |
| 0.4272 | 49.66 | 7400 | 0.4949 | 0.7647 | 0.765 |
| 0.4269 | 51.01 | 7600 | 0.4926 | 0.7637 | 0.764 |
| 0.4263 | 52.35 | 7800 | 0.4964 | 0.7577 | 0.758 |
| 0.4277 | 53.69 | 8000 | 0.4943 | 0.7582 | 0.759 |
| 0.4234 | 55.03 | 8200 | 0.4959 | 0.7579 | 0.758 |
| 0.4255 | 56.38 | 8400 | 0.4950 | 0.7599 | 0.76 |
| 0.4274 | 57.72 | 8600 | 0.4958 | 0.7535 | 0.754 |
| 0.4214 | 59.06 | 8800 | 0.4952 | 0.7547 | 0.755 |
| 0.4185 | 60.4 | 9000 | 0.4988 | 0.7588 | 0.759 |
| 0.4247 | 61.74 | 9200 | 0.4964 | 0.7577 | 0.758 |
| 0.4226 | 63.09 | 9400 | 0.4972 | 0.7618 | 0.762 |
| 0.4185 | 64.43 | 9600 | 0.4971 | 0.7616 | 0.762 |
| 0.4254 | 65.77 | 9800 | 0.4965 | 0.7587 | 0.759 |
| 0.4151 | 67.11 | 10000 | 0.4965 | 0.7596 | 0.76 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_2-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T01:12:28+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_tf\_2-seqsight\_4096\_512\_46M-L1\_f
=========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4291
* F1 Score: 0.7986
* Accuracy: 0.799
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** berger815
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
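
As a rough sketch (the `.gguf` filename below is a placeholder, and `n_ctx` is an arbitrary choice), a GGUF export like this one is typically run with `llama-cpp-python`:

```python
# Hypothetical GGUF usage sketch; the filename is a placeholder, not an actual file listing.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download("berger815/sail", "model.gguf")  # replace with the real .gguf filename
llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm("Question: What is a large language model?\nAnswer:", max_tokens=64)
print(result["choices"][0]["text"])
```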
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | berger815/sail | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T01:14:05+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: berger815
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: berger815\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: berger815\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4177
- F1 Score: 0.8059
- Accuracy: 0.806
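A minimal sketch of loading this adapter for inference; the classification head, label count, and `trust_remote_code` flag are assumptions, since the exact classes depend on how the seqsight base model is implemented:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_4096_512_46M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_2-seqsight_4096_512_46M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Score a toy DNA sequence (binary transcription-factor classification).
inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```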
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
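For reference, these settings map roughly onto a Hugging Face `TrainingArguments` configuration like the sketch below; the output directory and the 200-step evaluation cadence are inferred from this card rather than copied from the original training script:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_tf_2-seqsight_4096_512_46M-L8_f",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",
    eval_steps=200,  # matches the cadence of the results table below
)
```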
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5544 | 1.34 | 200 | 0.5176 | 0.7450 | 0.745 |
| 0.5145 | 2.68 | 400 | 0.5058 | 0.7586 | 0.759 |
| 0.5023 | 4.03 | 600 | 0.4987 | 0.7597 | 0.76 |
| 0.493 | 5.37 | 800 | 0.4933 | 0.7609 | 0.761 |
| 0.4858 | 6.71 | 1000 | 0.4995 | 0.7535 | 0.754 |
| 0.4776 | 8.05 | 1200 | 0.4864 | 0.7579 | 0.758 |
| 0.4701 | 9.4 | 1400 | 0.4893 | 0.7530 | 0.753 |
| 0.4687 | 10.74 | 1600 | 0.4842 | 0.7609 | 0.761 |
| 0.4636 | 12.08 | 1800 | 0.4857 | 0.7640 | 0.764 |
| 0.4569 | 13.42 | 2000 | 0.4932 | 0.7570 | 0.757 |
| 0.4552 | 14.77 | 2200 | 0.4921 | 0.7446 | 0.747 |
| 0.4445 | 16.11 | 2400 | 0.4963 | 0.7540 | 0.754 |
| 0.4464 | 17.45 | 2600 | 0.4895 | 0.7472 | 0.748 |
| 0.436 | 18.79 | 2800 | 0.4813 | 0.7595 | 0.76 |
| 0.4342 | 20.13 | 3000 | 0.4896 | 0.7710 | 0.771 |
| 0.4303 | 21.48 | 3200 | 0.4968 | 0.7720 | 0.772 |
| 0.4297 | 22.82 | 3400 | 0.4859 | 0.7587 | 0.759 |
| 0.4236 | 24.16 | 3600 | 0.4871 | 0.7634 | 0.764 |
| 0.4179 | 25.5 | 3800 | 0.4942 | 0.7676 | 0.768 |
| 0.4168 | 26.85 | 4000 | 0.4995 | 0.7535 | 0.754 |
| 0.4116 | 28.19 | 4200 | 0.4928 | 0.7600 | 0.76 |
| 0.4127 | 29.53 | 4400 | 0.4995 | 0.7506 | 0.751 |
| 0.4094 | 30.87 | 4600 | 0.4962 | 0.7628 | 0.763 |
| 0.4058 | 32.21 | 4800 | 0.5047 | 0.7649 | 0.765 |
| 0.4039 | 33.56 | 5000 | 0.5042 | 0.7620 | 0.762 |
| 0.4001 | 34.9 | 5200 | 0.5061 | 0.75 | 0.75 |
| 0.3923 | 36.24 | 5400 | 0.5088 | 0.7546 | 0.755 |
| 0.392 | 37.58 | 5600 | 0.5064 | 0.7567 | 0.757 |
| 0.3905 | 38.93 | 5800 | 0.5066 | 0.7522 | 0.753 |
| 0.3837 | 40.27 | 6000 | 0.5184 | 0.7472 | 0.749 |
| 0.3843 | 41.61 | 6200 | 0.5116 | 0.7604 | 0.761 |
| 0.3827 | 42.95 | 6400 | 0.5182 | 0.7619 | 0.762 |
| 0.3759 | 44.3 | 6600 | 0.5226 | 0.7499 | 0.75 |
| 0.3827 | 45.64 | 6800 | 0.5227 | 0.7590 | 0.759 |
| 0.3748 | 46.98 | 7000 | 0.5185 | 0.7599 | 0.76 |
| 0.3724 | 48.32 | 7200 | 0.5180 | 0.7597 | 0.76 |
| 0.3689 | 49.66 | 7400 | 0.5216 | 0.7549 | 0.755 |
| 0.3688 | 51.01 | 7600 | 0.5168 | 0.7558 | 0.756 |
| 0.3682 | 52.35 | 7800 | 0.5278 | 0.7558 | 0.756 |
| 0.3671 | 53.69 | 8000 | 0.5224 | 0.7484 | 0.749 |
| 0.3623 | 55.03 | 8200 | 0.5258 | 0.76 | 0.76 |
| 0.3639 | 56.38 | 8400 | 0.5292 | 0.7527 | 0.753 |
| 0.3653 | 57.72 | 8600 | 0.5249 | 0.7476 | 0.748 |
| 0.3592 | 59.06 | 8800 | 0.5289 | 0.7520 | 0.752 |
| 0.3548 | 60.4 | 9000 | 0.5363 | 0.7509 | 0.751 |
| 0.361 | 61.74 | 9200 | 0.5303 | 0.7498 | 0.75 |
| 0.3578 | 63.09 | 9400 | 0.5297 | 0.7530 | 0.753 |
| 0.355 | 64.43 | 9600 | 0.5298 | 0.7497 | 0.75 |
| 0.3611 | 65.77 | 9800 | 0.5298 | 0.7499 | 0.75 |
| 0.3525 | 67.11 | 10000 | 0.5308 | 0.7508 | 0.751 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_2-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T01:16:29+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_tf\_2-seqsight\_4096\_512\_46M-L8\_f
=========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4177
* F1 Score: 0.8059
* Accuracy: 0.806
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4244
- F1 Score: 0.8144
- Accuracy: 0.815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5473 | 1.34 | 200 | 0.5045 | 0.7480 | 0.748 |
| 0.5072 | 2.68 | 400 | 0.5003 | 0.7693 | 0.77 |
| 0.4937 | 4.03 | 600 | 0.4978 | 0.7570 | 0.758 |
| 0.4809 | 5.37 | 800 | 0.4858 | 0.7698 | 0.77 |
| 0.4694 | 6.71 | 1000 | 0.4840 | 0.7699 | 0.77 |
| 0.4588 | 8.05 | 1200 | 0.4779 | 0.7718 | 0.772 |
| 0.4478 | 9.4 | 1400 | 0.4832 | 0.7700 | 0.77 |
| 0.4415 | 10.74 | 1600 | 0.4833 | 0.7599 | 0.76 |
| 0.4325 | 12.08 | 1800 | 0.4849 | 0.7457 | 0.746 |
| 0.4207 | 13.42 | 2000 | 0.4910 | 0.7690 | 0.769 |
| 0.4159 | 14.77 | 2200 | 0.4891 | 0.7501 | 0.752 |
| 0.4005 | 16.11 | 2400 | 0.5130 | 0.766 | 0.766 |
| 0.397 | 17.45 | 2600 | 0.4905 | 0.7658 | 0.766 |
| 0.3812 | 18.79 | 2800 | 0.4931 | 0.7539 | 0.754 |
| 0.38 | 20.13 | 3000 | 0.5029 | 0.7689 | 0.769 |
| 0.366 | 21.48 | 3200 | 0.5259 | 0.7650 | 0.765 |
| 0.3615 | 22.82 | 3400 | 0.5142 | 0.7600 | 0.76 |
| 0.3533 | 24.16 | 3600 | 0.5133 | 0.7558 | 0.756 |
| 0.3404 | 25.5 | 3800 | 0.5249 | 0.7559 | 0.756 |
| 0.3372 | 26.85 | 4000 | 0.5349 | 0.7529 | 0.753 |
| 0.3282 | 28.19 | 4200 | 0.5312 | 0.7620 | 0.762 |
| 0.3224 | 29.53 | 4400 | 0.5504 | 0.7568 | 0.757 |
| 0.3177 | 30.87 | 4600 | 0.5576 | 0.7588 | 0.759 |
| 0.3093 | 32.21 | 4800 | 0.5691 | 0.7640 | 0.764 |
| 0.3065 | 33.56 | 5000 | 0.5583 | 0.7640 | 0.764 |
| 0.293 | 34.9 | 5200 | 0.5930 | 0.7468 | 0.747 |
| 0.2857 | 36.24 | 5400 | 0.5897 | 0.7610 | 0.761 |
| 0.2831 | 37.58 | 5600 | 0.5956 | 0.7559 | 0.756 |
| 0.2758 | 38.93 | 5800 | 0.5931 | 0.7526 | 0.753 |
| 0.2693 | 40.27 | 6000 | 0.6084 | 0.7511 | 0.752 |
| 0.2641 | 41.61 | 6200 | 0.6205 | 0.7555 | 0.756 |
| 0.2624 | 42.95 | 6400 | 0.6236 | 0.7680 | 0.768 |
| 0.2524 | 44.3 | 6600 | 0.6303 | 0.7579 | 0.758 |
| 0.2578 | 45.64 | 6800 | 0.6329 | 0.7589 | 0.759 |
| 0.251 | 46.98 | 7000 | 0.6389 | 0.7609 | 0.761 |
| 0.2421 | 48.32 | 7200 | 0.6307 | 0.7588 | 0.759 |
| 0.2375 | 49.66 | 7400 | 0.6399 | 0.7619 | 0.762 |
| 0.2346 | 51.01 | 7600 | 0.6300 | 0.7610 | 0.761 |
| 0.2343 | 52.35 | 7800 | 0.6554 | 0.7590 | 0.759 |
| 0.2256 | 53.69 | 8000 | 0.6766 | 0.7539 | 0.754 |
| 0.2247 | 55.03 | 8200 | 0.6589 | 0.7599 | 0.76 |
| 0.2262 | 56.38 | 8400 | 0.6732 | 0.7629 | 0.763 |
| 0.2237 | 57.72 | 8600 | 0.6709 | 0.7568 | 0.757 |
| 0.219 | 59.06 | 8800 | 0.6830 | 0.7598 | 0.76 |
| 0.2108 | 60.4 | 9000 | 0.6949 | 0.7589 | 0.759 |
| 0.2168 | 61.74 | 9200 | 0.6767 | 0.7609 | 0.761 |
| 0.2109 | 63.09 | 9400 | 0.6819 | 0.7520 | 0.752 |
| 0.2095 | 64.43 | 9600 | 0.6936 | 0.7629 | 0.763 |
| 0.2098 | 65.77 | 9800 | 0.6890 | 0.7590 | 0.759 |
| 0.2081 | 67.11 | 10000 | 0.6882 | 0.7630 | 0.763 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_tf_2-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T01:16:37+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_tf\_2-seqsight\_4096\_512\_46M-L32\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4244
* F1 Score: 0.8144
* Accuracy: 0.815
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_virus_covid-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4763
- F1 Score: 0.4656
- Accuracy: 0.4567
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 2.1823 | 0.35 | 200 | 2.1755 | 0.1308 | 0.1550 |
| 2.1713 | 0.7 | 400 | 2.1631 | 0.1232 | 0.1561 |
| 2.1562 | 1.05 | 600 | 2.1489 | 0.1267 | 0.1598 |
| 2.1422 | 1.4 | 800 | 2.1259 | 0.1591 | 0.1855 |
| 2.1125 | 1.75 | 1000 | 2.0921 | 0.2001 | 0.2172 |
| 2.0522 | 2.09 | 1200 | 2.0180 | 0.2094 | 0.2229 |
| 2.0008 | 2.44 | 1400 | 2.0021 | 0.2175 | 0.2235 |
| 1.9803 | 2.79 | 1600 | 1.9652 | 0.2402 | 0.2482 |
| 1.9533 | 3.14 | 1800 | 1.9438 | 0.2502 | 0.2591 |
| 1.9191 | 3.49 | 2000 | 1.9104 | 0.2594 | 0.2681 |
| 1.898 | 3.84 | 2200 | 1.8906 | 0.2787 | 0.2826 |
| 1.874 | 4.19 | 2400 | 1.8751 | 0.2928 | 0.2907 |
| 1.8486 | 4.54 | 2600 | 1.8183 | 0.3136 | 0.3125 |
| 1.8371 | 4.89 | 2800 | 1.8066 | 0.3193 | 0.3211 |
| 1.8173 | 5.24 | 3000 | 1.7830 | 0.3277 | 0.3274 |
| 1.7984 | 5.58 | 3200 | 1.7757 | 0.3458 | 0.3403 |
| 1.7731 | 5.93 | 3400 | 1.7557 | 0.3397 | 0.3405 |
| 1.7691 | 6.28 | 3600 | 1.7237 | 0.3532 | 0.3449 |
| 1.7417 | 6.63 | 3800 | 1.7201 | 0.3604 | 0.3551 |
| 1.7384 | 6.98 | 4000 | 1.6938 | 0.3712 | 0.3625 |
| 1.7209 | 7.33 | 4200 | 1.6768 | 0.3679 | 0.3640 |
| 1.7136 | 7.68 | 4400 | 1.6731 | 0.3859 | 0.3760 |
| 1.7088 | 8.03 | 4600 | 1.6489 | 0.3892 | 0.3776 |
| 1.6831 | 8.38 | 4800 | 1.6485 | 0.3828 | 0.3748 |
| 1.6867 | 8.73 | 5000 | 1.6315 | 0.3822 | 0.3806 |
| 1.6676 | 9.08 | 5200 | 1.6224 | 0.4034 | 0.3878 |
| 1.6583 | 9.42 | 5400 | 1.5976 | 0.3983 | 0.3943 |
| 1.6444 | 9.77 | 5600 | 1.5920 | 0.4158 | 0.4085 |
| 1.6423 | 10.12 | 5800 | 1.5823 | 0.4177 | 0.4045 |
| 1.6388 | 10.47 | 6000 | 1.5672 | 0.4187 | 0.4129 |
| 1.6279 | 10.82 | 6200 | 1.5768 | 0.4160 | 0.4046 |
| 1.6126 | 11.17 | 6400 | 1.5586 | 0.4314 | 0.4186 |
| 1.6081 | 11.52 | 6600 | 1.5500 | 0.4324 | 0.4235 |
| 1.6111 | 11.87 | 6800 | 1.5411 | 0.4372 | 0.4322 |
| 1.6029 | 12.22 | 7000 | 1.5316 | 0.4445 | 0.4380 |
| 1.587 | 12.57 | 7200 | 1.5282 | 0.4388 | 0.4354 |
| 1.5933 | 12.91 | 7400 | 1.5249 | 0.4433 | 0.4384 |
| 1.5853 | 13.26 | 7600 | 1.5170 | 0.4500 | 0.4448 |
| 1.5763 | 13.61 | 7800 | 1.5102 | 0.4547 | 0.4473 |
| 1.5714 | 13.96 | 8000 | 1.5047 | 0.4578 | 0.4488 |
| 1.5627 | 14.31 | 8200 | 1.5040 | 0.4541 | 0.4468 |
| 1.5692 | 14.66 | 8400 | 1.5010 | 0.4592 | 0.4483 |
| 1.5674 | 15.01 | 8600 | 1.4997 | 0.4604 | 0.4539 |
| 1.5651 | 15.36 | 8800 | 1.4966 | 0.4577 | 0.4481 |
| 1.551 | 15.71 | 9000 | 1.4878 | 0.4596 | 0.4544 |
| 1.5595 | 16.06 | 9200 | 1.4864 | 0.4639 | 0.4564 |
| 1.5496 | 16.4 | 9400 | 1.4850 | 0.4660 | 0.4594 |
| 1.5506 | 16.75 | 9600 | 1.4848 | 0.4615 | 0.4557 |
| 1.5503 | 17.1 | 9800 | 1.4855 | 0.4610 | 0.4553 |
| 1.5622 | 17.45 | 10000 | 1.4856 | 0.4608 | 0.4543 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_virus_covid-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_virus_covid-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T01:17:02+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_virus\_covid-seqsight\_4096\_512\_46M-L1\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4763
* F1 Score: 0.4656
* Accuracy: 0.4567
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_virus_covid-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1081
- F1 Score: 0.5885
- Accuracy: 0.5836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 2.1813 | 0.35 | 200 | 2.1726 | 0.1379 | 0.1588 |
| 2.1662 | 0.7 | 400 | 2.1509 | 0.1381 | 0.1652 |
| 2.1037 | 1.05 | 600 | 2.0112 | 0.2257 | 0.2359 |
| 1.9819 | 1.4 | 800 | 1.9200 | 0.2583 | 0.2687 |
| 1.8868 | 1.75 | 1000 | 1.8249 | 0.3095 | 0.3102 |
| 1.8165 | 2.09 | 1200 | 1.7372 | 0.3276 | 0.3355 |
| 1.7544 | 2.44 | 1400 | 1.6856 | 0.3698 | 0.3568 |
| 1.6947 | 2.79 | 1600 | 1.6089 | 0.3855 | 0.3813 |
| 1.6505 | 3.14 | 1800 | 1.5679 | 0.4055 | 0.4033 |
| 1.6094 | 3.49 | 2000 | 1.5561 | 0.4103 | 0.4004 |
| 1.5905 | 3.84 | 2200 | 1.5347 | 0.4307 | 0.4259 |
| 1.5608 | 4.19 | 2400 | 1.4948 | 0.4371 | 0.4297 |
| 1.5426 | 4.54 | 2600 | 1.4540 | 0.4686 | 0.4594 |
| 1.5276 | 4.89 | 2800 | 1.4528 | 0.4607 | 0.4530 |
| 1.501 | 5.24 | 3000 | 1.4105 | 0.4782 | 0.4726 |
| 1.492 | 5.58 | 3200 | 1.3877 | 0.4903 | 0.4852 |
| 1.456 | 5.93 | 3400 | 1.3723 | 0.4943 | 0.4924 |
| 1.4394 | 6.28 | 3600 | 1.3620 | 0.4935 | 0.4774 |
| 1.4203 | 6.63 | 3800 | 1.3357 | 0.5014 | 0.4939 |
| 1.4029 | 6.98 | 4000 | 1.3041 | 0.5138 | 0.5080 |
| 1.3836 | 7.33 | 4200 | 1.2996 | 0.5153 | 0.5026 |
| 1.3682 | 7.68 | 4400 | 1.2885 | 0.5183 | 0.5065 |
| 1.3661 | 8.03 | 4600 | 1.2755 | 0.5245 | 0.5152 |
| 1.3392 | 8.38 | 4800 | 1.2771 | 0.5231 | 0.5151 |
| 1.3477 | 8.73 | 5000 | 1.2642 | 0.5257 | 0.5173 |
| 1.3332 | 9.08 | 5200 | 1.2546 | 0.5357 | 0.5257 |
| 1.3227 | 9.42 | 5400 | 1.2411 | 0.5363 | 0.5317 |
| 1.3048 | 9.77 | 5600 | 1.2315 | 0.5333 | 0.5315 |
| 1.3071 | 10.12 | 5800 | 1.2159 | 0.5470 | 0.5413 |
| 1.2925 | 10.47 | 6000 | 1.1937 | 0.5575 | 0.5557 |
| 1.274 | 10.82 | 6200 | 1.1954 | 0.5525 | 0.5405 |
| 1.2555 | 11.17 | 6400 | 1.1973 | 0.5570 | 0.5376 |
| 1.2539 | 11.52 | 6600 | 1.1798 | 0.5659 | 0.5568 |
| 1.2569 | 11.87 | 6800 | 1.1652 | 0.5652 | 0.5551 |
| 1.2441 | 12.22 | 7000 | 1.1593 | 0.5680 | 0.5586 |
| 1.2274 | 12.57 | 7200 | 1.1594 | 0.5758 | 0.5644 |
| 1.232 | 12.91 | 7400 | 1.1576 | 0.5676 | 0.5604 |
| 1.2214 | 13.26 | 7600 | 1.1409 | 0.5732 | 0.5635 |
| 1.2106 | 13.61 | 7800 | 1.1350 | 0.5753 | 0.5673 |
| 1.2049 | 13.96 | 8000 | 1.1360 | 0.5757 | 0.5671 |
| 1.2026 | 14.31 | 8200 | 1.1227 | 0.5800 | 0.5723 |
| 1.2048 | 14.66 | 8400 | 1.1148 | 0.5845 | 0.5755 |
| 1.1968 | 15.01 | 8600 | 1.1170 | 0.5797 | 0.5730 |
| 1.1996 | 15.36 | 8800 | 1.1114 | 0.5863 | 0.5770 |
| 1.1758 | 15.71 | 9000 | 1.1122 | 0.5819 | 0.5766 |
| 1.1923 | 16.06 | 9200 | 1.1091 | 0.5861 | 0.5783 |
| 1.1735 | 16.4 | 9400 | 1.1096 | 0.5851 | 0.5801 |
| 1.1799 | 16.75 | 9600 | 1.1072 | 0.5845 | 0.5782 |
| 1.1815 | 17.1 | 9800 | 1.1085 | 0.5853 | 0.5787 |
| 1.181 | 17.45 | 10000 | 1.1072 | 0.5865 | 0.5802 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_virus_covid-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_virus_covid-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T01:17:17+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_virus\_covid-seqsight\_4096\_512\_46M-L8\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1081
* F1 Score: 0.5885
* Accuracy: 0.5836
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_Instruction0_SPAOL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
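A hypothetical inference sketch using the `text2text-generation` pipeline; the Vietnamese input sentence is arbitrary, and the prompt format expected by this instruction-tuned variant is not documented in this card:

```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="ThuyNT/CS505_COQE_viT5_train_Instruction0_SPAOL_v1",
)

# Arbitrary Vietnamese comparative-opinion sentence.
text = "Điện thoại này có pin tốt hơn so với máy cũ của tôi."
print(generator(text, max_new_tokens=64)[0]["generated_text"])
```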
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_SPAOL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_Instruction0_SPAOL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T01:17:27+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# CS505_COQE_viT5_train_Instruction0_SPAOL_v1
This model is a fine-tuned version of VietAI/vit5-large on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# CS505_COQE_viT5_train_Instruction0_SPAOL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# CS505_COQE_viT5_train_Instruction0_SPAOL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-70B builds upon the powerful foundation of the [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom, diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.
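For illustration, a minimal TRL `DPOTrainer` sketch of this preference-optimization step. The toy preference pairs are invented, the base checkpoint is only named for context (swap in a small model to actually run this), and argument placement differs across TRL versions — newer releases move `beta` and the length limits into `DPOConfig`:

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# Toy preference data in the prompt/chosen/rejected schema DPO expects.
pairs = Dataset.from_dict({
    "prompt":   ["What does aspirin do?"],
    "chosen":   ["Aspirin inhibits cyclooxygenase enzymes, reducing pain and inflammation."],
    "rejected": ["Aspirin is a vitamin supplement."],
})

model_id = "meta-llama/Meta-Llama-3-70B-Instruct"  # replace with a small model for a quick test
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

trainer = DPOTrainer(
    model=model,
    ref_model=None,   # TRL builds a frozen reference copy when None
    beta=0.1,         # strength of the implicit KL penalty from the DPO paper
    train_dataset=pairs,
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="dpo-out", per_device_train_batch_size=1, max_steps=1),
)
trainer.train()
```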
⚙️ **Release Details**:
- **Model Size**: 70 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-70B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from model:** [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
- **Resources for more information:**
- Paper: Coming soon
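For the quantized GGUF releases linked above, a hypothetical llama-cpp-python loading sketch (the local filename and sampling settings are placeholders):

```python
from llama_cpp import Llama

# Hypothetical filename downloaded from the OpenBioLLM-70B-GGUF repository.
llm = Llama(
    model_path="openbiollm-llama3-70b.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are OpenBioLLM, a biomedical assistant."},
        {"role": "user", "content": "How long does newborn jaundice usually last?"},
    ],
    max_tokens=256,
    temperature=0.0,
)
print(out["choices"][0]["message"]["content"])
```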
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-70B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 Instruct version; otherwise performance will degrade. The model output can occasionally be verbose; consider greedy decoding (temperature = 0) to reduce this.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch

model_id = "aaditya/OpenBioLLM-Llama3-70B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert and experienced professional from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. You are willing to help answer the user's query with an explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
    {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg dose?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop on both the standard EOS token and Llama-3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,  # greedy decoding, equivalent to temperature = 0 as advised above
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 8
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
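These adapter settings map roughly onto a `peft` `LoraConfig` like the sketch below; `task_type` is an assumption, since the full axolotl configuration is not reproduced here:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],
    task_type="CAUSAL_LM",  # assumption: causal-LM fine-tuning
)
```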
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **The results below are from the quantized version of OpenBioLLM-70B.**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization.

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | {"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-70B-Instruct", "widget": [{"example_title": "OpenBioLLM-70B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-70B", "results": []}]} | LoneStriker/OpenBioLLM-Llama3-70B-3.5bpw-h6-exl2 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"conversational",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T01:18:25+00:00 | [
"2305.18290",
"2303.13375",
"2212.13138",
"2305.09617",
"2402.07023"
] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #conversational #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|

Advancing Open-source Large Language Models in Medical Domain
=============================================================
Online Demo
|
GitHub
|
[](#) |
Discord
!image/jpeg
Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
Biomedical Specialization: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
Superior Performance: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
Advanced Training Techniques: OpenBioLLM-70B builds upon the powerful foundations of the Meta-Llama-3-70B-Instruct and Meta-Llama-3-70B-Instruct models. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
</li>
<li>Fine-tuning dataset: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)</li>
</ul>
<p>This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.</p>
<p>️ Release Details:</p>
<ul>
<li>Model Size: 70 billion parameters</li>
<li>Quantization: Optimized quantized versions available Here</li>
<li>Language(s) (NLP): en</li>
<li>Developed By: Ankit Pal (Aaditya Ura) from Saama AI Labs</li>
<li>License: Meta-Llama License</li>
<li>Fine-tuned from models: Meta-Llama-3-70B-Instruct</li>
<li>Resources for more information:
<ul>
<li>Paper: Coming soon</li>
</ul>
</li>
</ul>
<p>The model can be fine-tuned for more specialized tasks and datasets as needed.</p>
<p>OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.</p>
<p>We are excited to share OpenBioLLM-70B with researchers and developers around the world.</p>
<h3>Use with transformers</h3>
<p>Important: Please use the exact chat template provided by Llama-3 instruct version. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.</p>
<p>See the snippet below for usage with Transformers:</p>
<h2>Training procedure</h2>
<h3>Training hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>learning_rate: 0.0002</li>
<li>lr_scheduler: cosine</li>
<li>train_batch_size: 12</li>
<li>eval_batch_size: 8</li>
<li>GPU: H100 80GB SXM5</li>
<li>num_devices: 8</li>
<li>optimizer: adamw_bnb_8bit</li>
<li>lr_scheduler_warmup_steps: 100</li>
<li>num_epochs: 4</li>
</ul>
</details>
<h3>Peft hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>adapter: qlora</li>
<li>lora_r: 128</li>
<li>lora_alpha: 256</li>
<li>lora_dropout: 0.05</li>
<li>lora_target_linear: true</li>
</ul>
<p>-lora_target_modules:</p>
<ul>
<li>q_proj</li>
<li>v_proj</li>
<li>k_proj</li>
<li>o_proj</li>
<li>gate_proj</li>
<li>down_proj</li>
<li>up_proj</li>
</ul>
</details>
<h3>Training results</h3>
<h3>Framework versions</h3>
<ul>
<li>Transformers 4.39.3</li>
<li>Pytorch 2.1.2+cu121</li>
<li>Datasets 2.18.0</li>
<li>Tokenizers 0.15.1</li>
<li>Axolotl</li>
<li>Lm harness for evaluation</li>
</ul>
<h1>Benchmark Results</h1>
<p>OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.</p>
<p>The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.</p>
<p></p>
<div align=)
 from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.</p>
<p>!image/png</p>
<p>Advisory Notice!</p>
<p>While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.</p>
<p>Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.</p>
<p>Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:</p>
<ul>
<li>Thorough testing and evaluation in relevant clinical scenarios</li>
<li>Alignment with evidence-based guidelines and best practices</li>
<li>Mitigation of potential biases and failure modes</li>
<li>Integration with human oversight and interpretation</li>
<li>Compliance with regulatory and ethical standards</li>
</ul>
<p>Always consult a qualified healthcare provider for personal medical needs.</p>
<p>If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:</p>
<p>The accompanying paper is currently in progress and will be released soon.</p>
<div align=)
Contact
--------
We look forward to hearing you and collaborating on this exciting project!
Contributors:
* Ankit Pal (Aaditya Ura) [aadityaura at gmail dot com]
* Saama AI Labs
* Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
References
==========
We thank the Meta Team for their amazing models!
Result sources
* [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems] (URL
* [2] Med-PaLM-1 Large Language Models Encode Clinical Knowledge
* [3] Med-PaLM-2 Towards Expert-Level Medical Question Answering with Large Language Models
* [4] Gemini-1.0 Gemini Goes to Med School
| [] | [
"TAGS\n#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #conversational #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-envi-tran-10000
This model is a fine-tuned version of [unsloth/llama-2-7b-bnb-4bit](https://huggingface.co/unsloth/llama-2-7b-bnb-4bit) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 4
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- num_epochs: 5
- mixed_precision_training: Native AMP
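
For readers who want to reproduce a similar run, the snippet below is only an illustration of how the hyperparameters listed above typically map onto `transformers.TrainingArguments` for a TRL/unsloth SFT job; the output directory name is arbitrary and the dataset/model setup is omitted.

```python
# Illustration only: the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama2-envi-tran-10000",  # arbitrary
    learning_rate=2e-4,                   # learning_rate: 0.0002
    per_device_train_batch_size=8,        # train_batch_size: 8
    per_device_eval_batch_size=4,         # eval_batch_size: 4
    gradient_accumulation_steps=4,        # total_train_batch_size: 8 * 4 = 32
    seed=3407,
    lr_scheduler_type="cosine",
    warmup_steps=5,
    num_train_epochs=5,
    fp16=True,                            # mixed_precision_training: Native AMP
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default AdamW
# setting, so no extra optimizer arguments are needed here.
```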
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1238 | 0.33 | 104 | 0.9544 |
| 1.0108 | 0.67 | 208 | 0.9355 |
| 1.0058 | 1.0 | 312 | 0.9277 |
| 0.9212 | 1.33 | 416 | 0.9363 |
| 0.9255 | 1.66 | 520 | 0.9323 |
| 0.9222 | 2.0 | 624 | 0.9275 |
| 0.7975 | 2.33 | 728 | 0.9634 |
| 0.802 | 2.66 | 832 | 0.9622 |
| 0.7979 | 3.0 | 936 | 0.9654 |
| 0.6747 | 3.33 | 1040 | 1.0305 |
| 0.6646 | 3.66 | 1144 | 1.0320 |
| 0.6697 | 3.99 | 1248 | 1.0257 |
| 0.5873 | 4.33 | 1352 | 1.0879 |
| 0.5792 | 4.66 | 1456 | 1.0902 |
| 0.5819 | 4.99 | 1560 | 1.0906 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "unsloth", "generated_from_trainer"], "base_model": "unsloth/llama-2-7b-bnb-4bit", "model-index": [{"name": "llama2-envi-tran-10000", "results": []}]} | mob2711/llama2-envi-tran-10000 | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/llama-2-7b-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2024-04-27T01:22:19+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #unsloth #generated_from_trainer #base_model-unsloth/llama-2-7b-bnb-4bit #license-apache-2.0 #region-us
| llama2-envi-tran-10000
======================
This model is a fine-tuned version of unsloth/llama-2-7b-bnb-4bit on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0906
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 8
* eval\_batch\_size: 4
* seed: 3407
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 5
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.16.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 4\n* seed: 3407\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 5\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.16.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #unsloth #generated_from_trainer #base_model-unsloth/llama-2-7b-bnb-4bit #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 4\n* seed: 3407\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 5\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.16.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_virus_covid-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9287
- F1 Score: 0.6412
- Accuracy: 0.6377
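
The F1 score and accuracy reported above (and per checkpoint in the results table below) are standard classification metrics; the sketch below shows one common way to compute them in a `compute_metrics` callback. It is generic and illustrative — the exact averaging used for the F1 column is not stated in the card, so macro averaging here is an assumption.

```python
# Generic sketch (not this repository's training code): F1 score and accuracy
# as a compute_metrics callback for transformers.Trainer.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, predictions, average="macro"),  # averaging assumed
        "accuracy": accuracy_score(labels, predictions),
    }
```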
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 2.1803 | 0.35 | 200 | 2.1724 | 0.1376 | 0.1585 |
| 2.1374 | 0.7 | 400 | 2.0178 | 0.2202 | 0.2345 |
| 1.9484 | 1.05 | 600 | 1.8007 | 0.2992 | 0.3036 |
| 1.7811 | 1.4 | 800 | 1.6615 | 0.3624 | 0.3585 |
| 1.6659 | 1.75 | 1000 | 1.5691 | 0.3986 | 0.3904 |
| 1.6027 | 2.09 | 1200 | 1.5203 | 0.4294 | 0.4277 |
| 1.5534 | 2.44 | 1400 | 1.4686 | 0.4508 | 0.4411 |
| 1.5018 | 2.79 | 1600 | 1.4101 | 0.4675 | 0.4615 |
| 1.4466 | 3.14 | 1800 | 1.3581 | 0.4899 | 0.4825 |
| 1.4037 | 3.49 | 2000 | 1.3517 | 0.5056 | 0.4869 |
| 1.377 | 3.84 | 2200 | 1.3013 | 0.5177 | 0.5086 |
| 1.3263 | 4.19 | 2400 | 1.2510 | 0.5357 | 0.5213 |
| 1.3028 | 4.54 | 2600 | 1.2173 | 0.5464 | 0.5419 |
| 1.2795 | 4.89 | 2800 | 1.1947 | 0.5545 | 0.5496 |
| 1.2464 | 5.24 | 3000 | 1.1615 | 0.5598 | 0.5533 |
| 1.2267 | 5.58 | 3200 | 1.1412 | 0.5649 | 0.5616 |
| 1.1977 | 5.93 | 3400 | 1.1200 | 0.5797 | 0.5745 |
| 1.1765 | 6.28 | 3600 | 1.1011 | 0.5797 | 0.5772 |
| 1.1565 | 6.63 | 3800 | 1.0822 | 0.5955 | 0.5883 |
| 1.1371 | 6.98 | 4000 | 1.0718 | 0.5937 | 0.5913 |
| 1.1211 | 7.33 | 4200 | 1.0479 | 0.6054 | 0.6015 |
| 1.0994 | 7.68 | 4400 | 1.0352 | 0.6091 | 0.6043 |
| 1.1013 | 8.03 | 4600 | 1.0245 | 0.6073 | 0.6066 |
| 1.0758 | 8.38 | 4800 | 1.0250 | 0.6050 | 0.6031 |
| 1.078 | 8.73 | 5000 | 1.0151 | 0.6113 | 0.6070 |
| 1.0639 | 9.08 | 5200 | 1.0008 | 0.6213 | 0.6143 |
| 1.0561 | 9.42 | 5400 | 0.9956 | 0.6152 | 0.6144 |
| 1.0403 | 9.77 | 5600 | 0.9815 | 0.6249 | 0.6214 |
| 1.0424 | 10.12 | 5800 | 0.9702 | 0.6322 | 0.6296 |
| 1.0282 | 10.47 | 6000 | 0.9714 | 0.6280 | 0.6233 |
| 1.016 | 10.82 | 6200 | 0.9732 | 0.6329 | 0.6233 |
| 1.0003 | 11.17 | 6400 | 0.9658 | 0.6333 | 0.6260 |
| 0.9984 | 11.52 | 6600 | 0.9598 | 0.6355 | 0.6270 |
| 1.0111 | 11.87 | 6800 | 0.9515 | 0.6374 | 0.6307 |
| 0.9989 | 12.22 | 7000 | 0.9558 | 0.6380 | 0.6304 |
| 0.9844 | 12.57 | 7200 | 0.9471 | 0.6473 | 0.6359 |
| 0.9938 | 12.91 | 7400 | 0.9502 | 0.6389 | 0.6315 |
| 0.9856 | 13.26 | 7600 | 0.9465 | 0.6386 | 0.6295 |
| 0.9793 | 13.61 | 7800 | 0.9420 | 0.6399 | 0.6309 |
| 0.9789 | 13.96 | 8000 | 0.9383 | 0.6436 | 0.6387 |
| 0.9654 | 14.31 | 8200 | 0.9317 | 0.6444 | 0.6380 |
| 0.9687 | 14.66 | 8400 | 0.9353 | 0.6449 | 0.6370 |
| 0.9773 | 15.01 | 8600 | 0.9351 | 0.6430 | 0.6386 |
| 0.9756 | 15.36 | 8800 | 0.9294 | 0.6461 | 0.6404 |
| 0.9526 | 15.71 | 9000 | 0.9306 | 0.6453 | 0.6406 |
| 0.9682 | 16.06 | 9200 | 0.9280 | 0.6466 | 0.6404 |
| 0.959 | 16.4 | 9400 | 0.9295 | 0.6447 | 0.6413 |
| 0.9609 | 16.75 | 9600 | 0.9266 | 0.6493 | 0.6435 |
| 0.961 | 17.1 | 9800 | 0.9272 | 0.6478 | 0.6419 |
| 0.9618 | 17.45 | 10000 | 0.9274 | 0.6472 | 0.6416 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_virus_covid-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_virus_covid-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-27T01:23:35+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_virus\_covid-seqsight\_4096\_512\_46M-L32\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9287
* F1 Score: 0.6412
* Accuracy: 0.6377
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_tata-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4320
- F1 Score: 0.8043
- Accuracy: 0.8042
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5915 | 5.13 | 200 | 0.5190 | 0.7535 | 0.7569 |
| 0.4672 | 10.26 | 400 | 0.4808 | 0.8027 | 0.8026 |
| 0.4394 | 15.38 | 600 | 0.4687 | 0.8043 | 0.8042 |
| 0.4249 | 20.51 | 800 | 0.4618 | 0.8027 | 0.8026 |
| 0.4111 | 25.64 | 1000 | 0.4633 | 0.8027 | 0.8026 |
| 0.3977 | 30.77 | 1200 | 0.4591 | 0.7960 | 0.7961 |
| 0.3912 | 35.9 | 1400 | 0.4620 | 0.7910 | 0.7912 |
| 0.3805 | 41.03 | 1600 | 0.4727 | 0.7970 | 0.7977 |
| 0.3741 | 46.15 | 1800 | 0.4665 | 0.7926 | 0.7928 |
| 0.3649 | 51.28 | 2000 | 0.4732 | 0.7929 | 0.7928 |
| 0.3586 | 56.41 | 2200 | 0.4765 | 0.8044 | 0.8042 |
| 0.3517 | 61.54 | 2400 | 0.4861 | 0.8043 | 0.8042 |
| 0.3458 | 66.67 | 2600 | 0.4956 | 0.7976 | 0.7977 |
| 0.3372 | 71.79 | 2800 | 0.4924 | 0.7978 | 0.7977 |
| 0.3364 | 76.92 | 3000 | 0.4950 | 0.7913 | 0.7912 |
| 0.3278 | 82.05 | 3200 | 0.5055 | 0.8027 | 0.8026 |
| 0.3257 | 87.18 | 3400 | 0.5066 | 0.7945 | 0.7945 |
| 0.3199 | 92.31 | 3600 | 0.5064 | 0.7995 | 0.7993 |
| 0.3145 | 97.44 | 3800 | 0.5131 | 0.7946 | 0.7945 |
| 0.3079 | 102.56 | 4000 | 0.5167 | 0.7961 | 0.7961 |
| 0.3039 | 107.69 | 4200 | 0.5190 | 0.7962 | 0.7961 |
| 0.3005 | 112.82 | 4400 | 0.5159 | 0.7995 | 0.7993 |
| 0.299 | 117.95 | 4600 | 0.5180 | 0.7962 | 0.7961 |
| 0.2902 | 123.08 | 4800 | 0.5274 | 0.7962 | 0.7961 |
| 0.2866 | 128.21 | 5000 | 0.5370 | 0.7930 | 0.7928 |
| 0.2893 | 133.33 | 5200 | 0.5397 | 0.7895 | 0.7896 |
| 0.2819 | 138.46 | 5400 | 0.5363 | 0.7977 | 0.7977 |
| 0.28 | 143.59 | 5600 | 0.5435 | 0.7881 | 0.7879 |
| 0.2805 | 148.72 | 5800 | 0.5464 | 0.7977 | 0.7977 |
| 0.2758 | 153.85 | 6000 | 0.5409 | 0.7929 | 0.7928 |
| 0.2747 | 158.97 | 6200 | 0.5477 | 0.7913 | 0.7912 |
| 0.2723 | 164.1 | 6400 | 0.5517 | 0.7961 | 0.7961 |
| 0.2702 | 169.23 | 6600 | 0.5529 | 0.7913 | 0.7912 |
| 0.2657 | 174.36 | 6800 | 0.5580 | 0.7897 | 0.7896 |
| 0.2628 | 179.49 | 7000 | 0.5535 | 0.7913 | 0.7912 |
| 0.2604 | 184.62 | 7200 | 0.5698 | 0.7912 | 0.7912 |
| 0.2595 | 189.74 | 7400 | 0.5668 | 0.7929 | 0.7928 |
| 0.2612 | 194.87 | 7600 | 0.5702 | 0.7962 | 0.7961 |
| 0.2589 | 200.0 | 7800 | 0.5739 | 0.7896 | 0.7896 |
| 0.2595 | 205.13 | 8000 | 0.5653 | 0.7880 | 0.7879 |
| 0.2554 | 210.26 | 8200 | 0.5758 | 0.7897 | 0.7896 |
| 0.2514 | 215.38 | 8400 | 0.5829 | 0.7863 | 0.7863 |
| 0.2531 | 220.51 | 8600 | 0.5787 | 0.7880 | 0.7879 |
| 0.2506 | 225.64 | 8800 | 0.5808 | 0.7881 | 0.7879 |
| 0.2529 | 230.77 | 9000 | 0.5799 | 0.7929 | 0.7928 |
| 0.247 | 235.9 | 9200 | 0.5822 | 0.7946 | 0.7945 |
| 0.2498 | 241.03 | 9400 | 0.5814 | 0.7913 | 0.7912 |
| 0.2467 | 246.15 | 9600 | 0.5820 | 0.7930 | 0.7928 |
| 0.2499 | 251.28 | 9800 | 0.5824 | 0.7913 | 0.7912 |
| 0.2494 | 256.41 | 10000 | 0.5827 | 0.7913 | 0.7912 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T01:23:36+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_300\_tata-seqsight\_8192\_512\_30M-L1\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4320
* F1 Score: 0.8043
* Accuracy: 0.8042
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_tata-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4340
- F1 Score: 0.8107
- Accuracy: 0.8108
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5361 | 5.13 | 200 | 0.4726 | 0.7999 | 0.8010 |
| 0.4311 | 10.26 | 400 | 0.4750 | 0.7991 | 0.7993 |
| 0.3906 | 15.38 | 600 | 0.4535 | 0.8156 | 0.8157 |
| 0.3587 | 20.51 | 800 | 0.4676 | 0.8189 | 0.8189 |
| 0.338 | 25.64 | 1000 | 0.4852 | 0.8141 | 0.8140 |
| 0.3033 | 30.77 | 1200 | 0.5177 | 0.7927 | 0.7928 |
| 0.2868 | 35.9 | 1400 | 0.5366 | 0.8027 | 0.8026 |
| 0.264 | 41.03 | 1600 | 0.5506 | 0.8093 | 0.8091 |
| 0.2439 | 46.15 | 1800 | 0.5622 | 0.7961 | 0.7961 |
| 0.2265 | 51.28 | 2000 | 0.5868 | 0.8010 | 0.8010 |
| 0.2082 | 56.41 | 2200 | 0.6659 | 0.7979 | 0.7977 |
| 0.1918 | 61.54 | 2400 | 0.7247 | 0.7755 | 0.7765 |
| 0.1845 | 66.67 | 2600 | 0.6934 | 0.7815 | 0.7814 |
| 0.1682 | 71.79 | 2800 | 0.7161 | 0.7878 | 0.7879 |
| 0.1634 | 76.92 | 3000 | 0.7996 | 0.7763 | 0.7765 |
| 0.1513 | 82.05 | 3200 | 0.8184 | 0.7799 | 0.7798 |
| 0.1417 | 87.18 | 3400 | 0.8684 | 0.7552 | 0.7569 |
| 0.1415 | 92.31 | 3600 | 0.8256 | 0.7864 | 0.7863 |
| 0.1262 | 97.44 | 3800 | 0.8991 | 0.7653 | 0.7651 |
| 0.1194 | 102.56 | 4000 | 0.9260 | 0.7717 | 0.7716 |
| 0.1182 | 107.69 | 4200 | 0.9309 | 0.7648 | 0.7651 |
| 0.1072 | 112.82 | 4400 | 0.9617 | 0.7685 | 0.7684 |
| 0.1034 | 117.95 | 4600 | 0.9810 | 0.7750 | 0.7749 |
| 0.1029 | 123.08 | 4800 | 0.9391 | 0.7766 | 0.7765 |
| 0.0926 | 128.21 | 5000 | 1.0178 | 0.7668 | 0.7667 |
| 0.0956 | 133.33 | 5200 | 0.9644 | 0.7749 | 0.7749 |
| 0.0872 | 138.46 | 5400 | 1.0404 | 0.7766 | 0.7765 |
| 0.0829 | 143.59 | 5600 | 1.0505 | 0.7797 | 0.7798 |
| 0.0811 | 148.72 | 5800 | 1.0447 | 0.7815 | 0.7814 |
| 0.0792 | 153.85 | 6000 | 1.0644 | 0.7797 | 0.7798 |
| 0.0774 | 158.97 | 6200 | 1.1230 | 0.7782 | 0.7781 |
| 0.0736 | 164.1 | 6400 | 1.0981 | 0.7799 | 0.7798 |
| 0.0747 | 169.23 | 6600 | 1.0630 | 0.7798 | 0.7798 |
| 0.0729 | 174.36 | 6800 | 1.0963 | 0.7813 | 0.7814 |
| 0.07 | 179.49 | 7000 | 1.1042 | 0.7766 | 0.7765 |
| 0.0679 | 184.62 | 7200 | 1.1413 | 0.7718 | 0.7716 |
| 0.0635 | 189.74 | 7400 | 1.1705 | 0.7750 | 0.7749 |
| 0.0645 | 194.87 | 7600 | 1.1721 | 0.7782 | 0.7781 |
| 0.0669 | 200.0 | 7800 | 1.1447 | 0.7767 | 0.7765 |
| 0.0659 | 205.13 | 8000 | 1.1749 | 0.7766 | 0.7765 |
| 0.0616 | 210.26 | 8200 | 1.1785 | 0.7750 | 0.7749 |
| 0.0553 | 215.38 | 8400 | 1.2185 | 0.7701 | 0.7700 |
| 0.0645 | 220.51 | 8600 | 1.1544 | 0.7766 | 0.7765 |
| 0.0574 | 225.64 | 8800 | 1.2041 | 0.7734 | 0.7732 |
| 0.0631 | 230.77 | 9000 | 1.1826 | 0.7782 | 0.7781 |
| 0.0556 | 235.9 | 9200 | 1.1807 | 0.7782 | 0.7781 |
| 0.061 | 241.03 | 9400 | 1.1816 | 0.7782 | 0.7781 |
| 0.0562 | 246.15 | 9600 | 1.2023 | 0.7814 | 0.7814 |
| 0.0563 | 251.28 | 9800 | 1.1978 | 0.7783 | 0.7781 |
| 0.0553 | 256.41 | 10000 | 1.1962 | 0.7831 | 0.7830 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T01:26:35+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_300\_tata-seqsight\_8192\_512\_30M-L8\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4340
* F1 Score: 0.8107
* Accuracy: 0.8108
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_Instruction0_OPASL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
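
No usage example is provided in this card; the snippet below is a generic sketch of loading the fine-tuned checkpoint for text-to-text generation with transformers. The input string is a placeholder — the expected (Vietnamese, instruction-style) prompt format is not documented here.

```python
# Generic text2text-generation sketch for the fine-tuned ViT5 checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ThuyNT/CS505_COQE_viT5_train_Instruction0_OPASL_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "<Vietnamese input sentence here>"  # placeholder; real prompt format is undocumented
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```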
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_OPASL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_Instruction0_OPASL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T01:29:08+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# CS505_COQE_viT5_train_Instruction0_OPASL_v1
This model is a fine-tuned version of VietAI/vit5-large on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# CS505_COQE_viT5_train_Instruction0_OPASL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# CS505_COQE_viT5_train_Instruction0_OPASL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_tata-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4592
- F1 Score: 0.8205
- Accuracy: 0.8206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5086 | 5.13 | 200 | 0.4565 | 0.7930 | 0.7928 |
| 0.3952 | 10.26 | 400 | 0.5182 | 0.7677 | 0.7700 |
| 0.33 | 15.38 | 600 | 0.4784 | 0.8207 | 0.8206 |
| 0.2734 | 20.51 | 800 | 0.5296 | 0.8043 | 0.8042 |
| 0.2262 | 25.64 | 1000 | 0.6202 | 0.8027 | 0.8026 |
| 0.1855 | 30.77 | 1200 | 0.7224 | 0.7817 | 0.7830 |
| 0.1554 | 35.9 | 1400 | 0.7096 | 0.8106 | 0.8108 |
| 0.1271 | 41.03 | 1600 | 0.7358 | 0.8093 | 0.8091 |
| 0.1132 | 46.15 | 1800 | 0.7646 | 0.8027 | 0.8026 |
| 0.0916 | 51.28 | 2000 | 0.9272 | 0.7911 | 0.7912 |
| 0.0795 | 56.41 | 2200 | 0.9784 | 0.7924 | 0.7928 |
| 0.0763 | 61.54 | 2400 | 1.0530 | 0.7833 | 0.7847 |
| 0.0701 | 66.67 | 2600 | 0.9257 | 0.8027 | 0.8026 |
| 0.0577 | 71.79 | 2800 | 1.0282 | 0.7993 | 0.7993 |
| 0.056 | 76.92 | 3000 | 1.0811 | 0.7979 | 0.7977 |
| 0.0545 | 82.05 | 3200 | 1.0417 | 0.7831 | 0.7830 |
| 0.0496 | 87.18 | 3400 | 0.9956 | 0.7913 | 0.7912 |
| 0.0465 | 92.31 | 3600 | 1.1224 | 0.7911 | 0.7912 |
| 0.0431 | 97.44 | 3800 | 1.0531 | 0.7946 | 0.7945 |
| 0.0384 | 102.56 | 4000 | 1.1811 | 0.7995 | 0.7993 |
| 0.0383 | 107.69 | 4200 | 1.1002 | 0.8011 | 0.8010 |
| 0.0355 | 112.82 | 4400 | 1.1293 | 0.7995 | 0.7993 |
| 0.0357 | 117.95 | 4600 | 1.1393 | 0.8027 | 0.8026 |
| 0.0332 | 123.08 | 4800 | 1.2111 | 0.7944 | 0.7945 |
| 0.0289 | 128.21 | 5000 | 1.2221 | 0.7995 | 0.7993 |
| 0.0283 | 133.33 | 5200 | 1.2444 | 0.7977 | 0.7977 |
| 0.0287 | 138.46 | 5400 | 1.2123 | 0.8027 | 0.8026 |
| 0.0266 | 143.59 | 5600 | 1.2331 | 0.8044 | 0.8042 |
| 0.0273 | 148.72 | 5800 | 1.2408 | 0.8011 | 0.8010 |
| 0.0255 | 153.85 | 6000 | 1.2152 | 0.8044 | 0.8042 |
| 0.0252 | 158.97 | 6200 | 1.2034 | 0.8011 | 0.8010 |
| 0.0232 | 164.1 | 6400 | 1.1980 | 0.7995 | 0.7993 |
| 0.0252 | 169.23 | 6600 | 1.2003 | 0.7913 | 0.7912 |
| 0.0218 | 174.36 | 6800 | 1.2275 | 0.7946 | 0.7945 |
| 0.0209 | 179.49 | 7000 | 1.3166 | 0.7993 | 0.7993 |
| 0.0202 | 184.62 | 7200 | 1.2827 | 0.7995 | 0.7993 |
| 0.0202 | 189.74 | 7400 | 1.2810 | 0.7962 | 0.7961 |
| 0.0184 | 194.87 | 7600 | 1.2615 | 0.7929 | 0.7928 |
| 0.0194 | 200.0 | 7800 | 1.2767 | 0.7930 | 0.7928 |
| 0.02 | 205.13 | 8000 | 1.2672 | 0.7978 | 0.7977 |
| 0.0211 | 210.26 | 8200 | 1.2335 | 0.7962 | 0.7961 |
| 0.0172 | 215.38 | 8400 | 1.3205 | 0.7962 | 0.7961 |
| 0.018 | 220.51 | 8600 | 1.3163 | 0.7979 | 0.7977 |
| 0.016 | 225.64 | 8800 | 1.3450 | 0.7913 | 0.7912 |
| 0.0175 | 230.77 | 9000 | 1.3278 | 0.7995 | 0.7993 |
| 0.0151 | 235.9 | 9200 | 1.3556 | 0.7962 | 0.7961 |
| 0.0159 | 241.03 | 9400 | 1.3456 | 0.7979 | 0.7977 |
| 0.0148 | 246.15 | 9600 | 1.3545 | 0.7962 | 0.7961 |
| 0.015 | 251.28 | 9800 | 1.3576 | 0.7979 | 0.7977 |
| 0.0136 | 256.41 | 10000 | 1.3621 | 0.7962 | 0.7961 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T01:30:04+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_300\_tata-seqsight\_8192\_512\_30M-L32\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4592
* F1 Score: 0.8205
* Accuracy: 0.8206
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_notata-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1256
- F1 Score: 0.9533
- Accuracy: 0.9533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.3525 | 0.6 | 200 | 0.1641 | 0.9355 | 0.9356 |
| 0.1746 | 1.2 | 400 | 0.1422 | 0.9448 | 0.9448 |
| 0.1614 | 1.81 | 600 | 0.1324 | 0.9467 | 0.9467 |
| 0.1496 | 2.41 | 800 | 0.1273 | 0.9487 | 0.9487 |
| 0.1434 | 3.01 | 1000 | 0.1223 | 0.9514 | 0.9514 |
| 0.1416 | 3.61 | 1200 | 0.1195 | 0.9529 | 0.9529 |
| 0.1389 | 4.22 | 1400 | 0.1215 | 0.9527 | 0.9527 |
| 0.1355 | 4.82 | 1600 | 0.1168 | 0.9540 | 0.9540 |
| 0.132 | 5.42 | 1800 | 0.1173 | 0.9542 | 0.9542 |
| 0.1277 | 6.02 | 2000 | 0.1153 | 0.9533 | 0.9533 |
| 0.1289 | 6.63 | 2200 | 0.1133 | 0.9542 | 0.9542 |
| 0.1253 | 7.23 | 2400 | 0.1126 | 0.9553 | 0.9553 |
| 0.122 | 7.83 | 2600 | 0.1120 | 0.9557 | 0.9557 |
| 0.125 | 8.43 | 2800 | 0.1117 | 0.9574 | 0.9574 |
| 0.1245 | 9.04 | 3000 | 0.1137 | 0.9544 | 0.9544 |
| 0.1219 | 9.64 | 3200 | 0.1172 | 0.9557 | 0.9557 |
| 0.1194 | 10.24 | 3400 | 0.1133 | 0.9557 | 0.9557 |
| 0.1223 | 10.84 | 3600 | 0.1084 | 0.9600 | 0.9601 |
| 0.1147 | 11.45 | 3800 | 0.1077 | 0.9595 | 0.9595 |
| 0.114 | 12.05 | 4000 | 0.1084 | 0.9597 | 0.9597 |
| 0.1192 | 12.65 | 4200 | 0.1076 | 0.9568 | 0.9568 |
| 0.1193 | 13.25 | 4400 | 0.1075 | 0.9587 | 0.9587 |
| 0.1142 | 13.86 | 4600 | 0.1064 | 0.9597 | 0.9597 |
| 0.1148 | 14.46 | 4800 | 0.1082 | 0.9583 | 0.9584 |
| 0.1122 | 15.06 | 5000 | 0.1051 | 0.9589 | 0.9589 |
| 0.1125 | 15.66 | 5200 | 0.1068 | 0.9587 | 0.9587 |
| 0.1146 | 16.27 | 5400 | 0.1062 | 0.9587 | 0.9587 |
| 0.1106 | 16.87 | 5600 | 0.1041 | 0.9604 | 0.9604 |
| 0.1111 | 17.47 | 5800 | 0.1042 | 0.9595 | 0.9595 |
| 0.1158 | 18.07 | 6000 | 0.1045 | 0.9593 | 0.9593 |
| 0.1103 | 18.67 | 6200 | 0.1052 | 0.9585 | 0.9585 |
| 0.1118 | 19.28 | 6400 | 0.1043 | 0.9604 | 0.9604 |
| 0.1096 | 19.88 | 6600 | 0.1067 | 0.9583 | 0.9584 |
| 0.1107 | 20.48 | 6800 | 0.1058 | 0.9597 | 0.9597 |
| 0.1079 | 21.08 | 7000 | 0.1042 | 0.9602 | 0.9602 |
| 0.1086 | 21.69 | 7200 | 0.1042 | 0.9612 | 0.9612 |
| 0.1078 | 22.29 | 7400 | 0.1054 | 0.9612 | 0.9612 |
| 0.1101 | 22.89 | 7600 | 0.1028 | 0.9616 | 0.9616 |
| 0.1092 | 23.49 | 7800 | 0.1026 | 0.9606 | 0.9606 |
| 0.1054 | 24.1 | 8000 | 0.1034 | 0.9602 | 0.9602 |
| 0.108 | 24.7 | 8200 | 0.1033 | 0.9599 | 0.9599 |
| 0.1071 | 25.3 | 8400 | 0.1044 | 0.9600 | 0.9601 |
| 0.1075 | 25.9 | 8600 | 0.1023 | 0.9608 | 0.9608 |
| 0.1071 | 26.51 | 8800 | 0.1024 | 0.9606 | 0.9606 |
| 0.1078 | 27.11 | 9000 | 0.1025 | 0.9608 | 0.9608 |
| 0.1052 | 27.71 | 9200 | 0.1031 | 0.9606 | 0.9606 |
| 0.1041 | 28.31 | 9400 | 0.1030 | 0.9608 | 0.9608 |
| 0.1089 | 28.92 | 9600 | 0.1026 | 0.9608 | 0.9608 |
| 0.1043 | 29.52 | 9800 | 0.1029 | 0.9608 | 0.9608 |
| 0.1089 | 30.12 | 10000 | 0.1028 | 0.9608 | 0.9608 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T01:30:18+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_300\_notata-seqsight\_8192\_512\_30M-L1\_f
===========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1256
* F1 Score: 0.9533
* Accuracy: 0.9533
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-70B builds upon the powerful foundation of the [Meta-Llama-3-70B-Instruct](meta-llama/Meta-Llama-3-70B-Instruct) model. It incorporates a DPO dataset and fine-tuning recipe along with a custom, diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.
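
As an illustration of the DPO stage described above, the following is a minimal, hedged sketch using TRL's `DPOTrainer`. The dataset name and hyperparameters are placeholders, the actual OpenBioLLM DPO data and recipe have not been released, and the exact `DPOTrainer` signature varies between TRL versions; in practice the policy would also be wrapped with the QLoRA adapters shown later rather than fully fine-tuned.

```python
# Illustrative DPO sketch (not the actual OpenBioLLM training code).
# Assumes a preference dataset with "prompt", "chosen" and "rejected" columns.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "meta-llama/Meta-Llama-3-70B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
ref_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # frozen reference
tokenizer = AutoTokenizer.from_pretrained(base_id)

train_dataset = load_dataset("your/medical-preference-pairs", split="train")  # placeholder

training_args = TrainingArguments(
    output_dir="openbiollm-dpo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=training_args,
    beta=0.1,                     # KL-penalty strength from the DPO paper
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```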
⚙️ **Release Details**:
- **Model Size**: 70 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-70B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [Meta-Llama-3-70B-Instruct](meta-llama/Meta-Llama-3-70B-Instruct)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-70B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise there will be a degradation in performance. The model output can be verbose in rare cases; consider using greedy decoding (temperature = 0, i.e. `do_sample=False`) to reduce this.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-70B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",  # "auto" placement requires device_map (accelerate), not device
)
messages = [
{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
{"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=False,  # greedy decoding; sampling with temperature=0.0 raises an error in transformers
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 8
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
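
For readers unfamiliar with the QLoRA settings listed above, here is a hedged sketch of how they would typically be expressed with the `peft` library; it is a reconstruction from the listed values, not the repository's actual Axolotl configuration.

```python
# Sketch of the listed QLoRA adapter settings expressed with peft
# (the actual training used an Axolotl config; this is a reconstruction).
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

lora_config = LoraConfig(
    r=128,                      # lora_r
    lora_alpha=256,             # lora_alpha
    lora_dropout=0.05,          # lora_dropout
    target_modules=[            # lora_target_modules
        "q_proj", "v_proj", "k_proj", "o_proj",
        "gate_proj", "down_proj", "up_proj",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)

# `model` is assumed to be a 4-bit-quantized Meta-Llama-3-70B-Instruct checkpoint
# loaded elsewhere with bitsandbytes (adapter: qlora):
# model = prepare_model_for_kbit_training(model)
# model = get_peft_model(model, lora_config)
```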
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **The results below are from the quantized version of OpenBioLLM-70B.**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.
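
As a concrete illustration of this use case, the prompt below reuses the `pipeline` object from the earlier transformers snippet; the system prompt wording, the JSON keys, and the sample clinical note are assumptions for illustration, not an official prompting recipe.

```python
# Illustrative entity-extraction prompt (assumed format, not an official recipe).
# `pipeline` is the text-generation pipeline built in the usage snippet above.
clinical_note = (
    "Patient presents with shortness of breath and chest pain. "
    "Started on aspirin 81 mg daily; echocardiogram ordered."
)

messages = [
    {"role": "system", "content": (
        "You are OpenBioLLM, a biomedical assistant. Extract clinical entities from the "
        "user's text and return JSON with the keys 'diseases_or_symptoms', "
        "'medications', and 'procedures'.")},
    {"role": "user", "content": clinical_note},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]
outputs = pipeline(prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=False)
print(outputs[0]["generated_text"][len(prompt):])
```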



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization.

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | {"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-70B-Instruct", "widget": [{"example_title": "OpenBioLLM-70B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-70B", "results": []}]} | LoneStriker/OpenBioLLM-Llama3-70B-4.0bpw-h6-exl2 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"conversational",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-27T01:31:50+00:00 | [
"2305.18290",
"2303.13375",
"2212.13138",
"2305.09617",
"2402.07023"
] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #conversational #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|

Advancing Open-source Large Language Models in Medical Domain
=============================================================
Online Demo
|
GitHub
|
Paper |
Discord
!image/jpeg
Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
Biomedical Specialization: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
Superior Performance: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
Advanced Training Techniques: OpenBioLLM-70B builds upon the powerful foundations of the Meta-Llama-3-70B-Instruct and Meta-Llama-3-70B-Instruct models. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<ul>
<li>Policy Optimization: Direct Preference Optimization (DPO)</li>
<li>Fine-tuning dataset: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)</li>
</ul>
<p>This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.</p>
<p>️ Release Details:</p>
<ul>
<li>Model Size: 70 billion parameters</li>
<li>Quantization: Optimized quantized versions available Here</li>
<li>Language(s) (NLP): en</li>
<li>Developed By: Ankit Pal (Aaditya Ura) from Saama AI Labs</li>
<li>License: Meta-Llama License</li>
<li>Fine-tuned from models: Meta-Llama-3-70B-Instruct</li>
<li>Resources for more information:
<ul>
<li>Paper: Coming soon</li>
</ul>
</li>
</ul>
<p>The model can be fine-tuned for more specialized tasks and datasets as needed.</p>
<p>OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.</p>
<p>We are excited to share OpenBioLLM-70B with researchers and developers around the world.</p>
<h3>Use with transformers</h3>
<p>Important: Please use the exact chat template provided by Llama-3 instruct version. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.</p>
<p>See the snippet below for usage with Transformers:</p>
<h2>Training procedure</h2>
<h3>Training hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>learning_rate: 0.0002</li>
<li>lr_scheduler: cosine</li>
<li>train_batch_size: 12</li>
<li>eval_batch_size: 8</li>
<li>GPU: H100 80GB SXM5</li>
<li>num_devices: 8</li>
<li>optimizer: adamw_bnb_8bit</li>
<li>lr_scheduler_warmup_steps: 100</li>
<li>num_epochs: 4</li>
</ul>
</details>
<h3>Peft hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>adapter: qlora</li>
<li>lora_r: 128</li>
<li>lora_alpha: 256</li>
<li>lora_dropout: 0.05</li>
<li>lora_target_linear: true</li>
</ul>
<p>-lora_target_modules:</p>
<ul>
<li>q_proj</li>
<li>v_proj</li>
<li>k_proj</li>
<li>o_proj</li>
<li>gate_proj</li>
<li>down_proj</li>
<li>up_proj</li>
</ul>
</details>
<h3>Training results</h3>
<h3>Framework versions</h3>
<ul>
<li>Transformers 4.39.3</li>
<li>Pytorch 2.1.2+cu121</li>
<li>Datasets 2.18.0</li>
<li>Tokenizers 0.15.1</li>
<li>Axolotl</li>
<li>Lm harness for evaluation</li>
</ul>
<h1>Benchmark Results</h1>
<p>OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.</p>
<p>The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.</p>
<p>OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.</p>
<p>!image/png</p>
<p>Advisory Notice!</p>
<p>While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.</p>
<p>Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.</p>
<p>Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:</p>
<ul>
<li>Thorough testing and evaluation in relevant clinical scenarios</li>
<li>Alignment with evidence-based guidelines and best practices</li>
<li>Mitigation of potential biases and failure modes</li>
<li>Integration with human oversight and interpretation</li>
<li>Compliance with regulatory and ethical standards</li>
</ul>
<p>Always consult a qualified healthcare provider for personal medical needs.</p>
<p>If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:</p>
<p>The accompanying paper is currently in progress and will be released soon.</p>
Contact
--------
We look forward to hearing from you and collaborating on this exciting project!
Contributors:
* Ankit Pal (Aaditya Ura) [aadityaura at gmail dot com]
* Saama AI Labs
* Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
References
==========
We thank the Meta Team for their amazing models!
Result sources
* [1] GPT-4 Capabilities of GPT-4 on Medical Challenge Problems
* [2] Med-PaLM-1 Large Language Models Encode Clinical Knowledge
* [3] Med-PaLM-2 Towards Expert-Level Medical Question Answering with Large Language Models
* [4] Gemini-1.0 Gemini Goes to Med School
| [] | [
"TAGS\n#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #conversational #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_Instruction0_PAOSL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
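
For readers who want to reproduce a comparable setup, a minimal sketch of how these values could map onto Hugging Face `Seq2SeqTrainingArguments` is shown below. The output directory is a placeholder, the Adam betas/epsilon listed above are the library defaults, and this is an illustration rather than the script actually used for this run.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical mirror of the hyperparameters listed above (output_dir is a placeholder).
training_args = Seq2SeqTrainingArguments(
    output_dir="CS505_COQE_viT5_train_Instruction0_PAOSL_v1",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,  # "Native AMP" mixed precision
)
```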
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_PAOSL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_Instruction0_PAOSL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T01:31:54+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# CS505_COQE_viT5_train_Instruction0_PAOSL_v1
This model is a fine-tuned version of VietAI/vit5-large on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# CS505_COQE_viT5_train_Instruction0_PAOSL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# CS505_COQE_viT5_train_Instruction0_PAOSL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
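
Usage is not documented yet. As a hedged starting point, the checkpoint can presumably be queried with the standard fill-mask pipeline, assuming it retains bert-base-uncased's masked-language-modeling head; the example sentence is purely illustrative.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="beloiual/model_3")
print(unmasker("The capital of France is [MASK]."))  # [MASK] is bert-base-uncased's mask token
```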
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "bert-base-uncased", "model-index": [{"name": "model_3", "results": []}]} | beloiual/model_3 | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T01:32:50+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bert #fill-mask #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# model_3
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# model_3\n\nThis model is a fine-tuned version of bert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #fill-mask #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# model_3\n\nThis model is a fine-tuned version of bert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_Instruction0_APOSL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_APOSL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_Instruction0_APOSL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T01:33:01+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# CS505_COQE_viT5_train_Instruction0_APOSL_v1
This model is a fine-tuned version of VietAI/vit5-large on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# CS505_COQE_viT5_train_Instruction0_APOSL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# CS505_COQE_viT5_train_Instruction0_APOSL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_3_edges
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
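
Usage is not documented yet. Since the card is tagged for token classification, a plausible (unverified) way to query the checkpoint is via the token-classification pipeline; the label set and the example text below are assumptions.

```python
from transformers import pipeline

tagger = pipeline("token-classification", model="beloiual/model_3_edges", aggregation_strategy="simple")
print(tagger("Alice emailed the quarterly report to Bob on Friday."))
```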
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "model_3_edges", "results": []}]} | beloiual/model_3_edges | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T01:35:09+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# model_3_edges
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# model_3_edges\n\nThis model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# model_3_edges\n\nThis model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
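
Until the authors fill this in, a minimal hedged sketch for loading this GPTQ-quantized checkpoint with `transformers` is given below. It assumes the repository ships a standard GPTQ quantization config and a chat template (suggested by the `conversational` tag), and that the `optimum`/`auto-gptq` backends are installed; none of this is confirmed by the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "obamaTeo/mistral-finetune-16bit-ver9-wiki-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# 4-bit GPTQ weights load through the optimum/auto-gptq integration.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give a one-sentence summary of photosynthesis."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```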
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | obamaTeo/mistral-finetune-16bit-ver9-wiki-GPTQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-27T01:43:19+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | Hello, my name is nanit. I am an AI assistant designed to help you with various tasks and provide information. I can assist with answering questions, providing recommendations, and performing calculations. I am constantly learning and improving to better serve you. If you have any specific questions or tasks you would like assistance with, please let me know. | {"license": "other", "library_name": "transformers"} | AlanRobotics/nanit_slerp | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T01:43:40+00:00 | [] | [] | TAGS
#transformers #safetensors #phi #text-generation #custom_code #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Hello, my name is nanit. I am an AI assistant designed to help you with various tasks and provide information. I can assist with answering questions, providing recommendations, and performing calculations. I am constantly learning and improving to better serve you. If you have any specific questions or tasks you would like assistance with, please let me know. | [] | [
"TAGS\n#transformers #safetensors #phi #text-generation #custom_code #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_notata-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1197
- F1 Score: 0.9561
- Accuracy: 0.9561
## Model description
More information needed
## Intended uses & limitations
More information needed
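
Usage is not documented here. Because the repository holds a PEFT (LoRA) adapter rather than full model weights, a plausible loading sketch is shown below; the `AutoModelForSequenceClassification` class, `trust_remote_code` flag, label count, and example DNA sequence are assumptions about the custom seqsight base model, not facts from this card.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_8192_512_30M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # assumed binary promoter-detection head
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attaches the LoRA adapter weights

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")
print(model(**inputs).logits)
```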
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.2892 | 0.6 | 200 | 0.1442 | 0.9422 | 0.9422 |
| 0.1464 | 1.2 | 400 | 0.1229 | 0.9527 | 0.9527 |
| 0.1407 | 1.81 | 600 | 0.1152 | 0.9527 | 0.9527 |
| 0.1283 | 2.41 | 800 | 0.1134 | 0.9538 | 0.9538 |
| 0.1259 | 3.01 | 1000 | 0.1108 | 0.9567 | 0.9567 |
| 0.1226 | 3.61 | 1200 | 0.1075 | 0.9572 | 0.9572 |
| 0.1187 | 4.22 | 1400 | 0.1088 | 0.9572 | 0.9572 |
| 0.1171 | 4.82 | 1600 | 0.1053 | 0.9597 | 0.9597 |
| 0.1127 | 5.42 | 1800 | 0.1074 | 0.9598 | 0.9599 |
| 0.1087 | 6.02 | 2000 | 0.1091 | 0.9574 | 0.9574 |
| 0.1101 | 6.63 | 2200 | 0.1026 | 0.9610 | 0.9610 |
| 0.1064 | 7.23 | 2400 | 0.1037 | 0.9612 | 0.9612 |
| 0.1036 | 7.83 | 2600 | 0.1029 | 0.9608 | 0.9608 |
| 0.107 | 8.43 | 2800 | 0.1051 | 0.9604 | 0.9604 |
| 0.1064 | 9.04 | 3000 | 0.1087 | 0.9589 | 0.9589 |
| 0.1017 | 9.64 | 3200 | 0.1077 | 0.9599 | 0.9599 |
| 0.0991 | 10.24 | 3400 | 0.1037 | 0.9602 | 0.9602 |
| 0.1013 | 10.84 | 3600 | 0.1003 | 0.9614 | 0.9614 |
| 0.0961 | 11.45 | 3800 | 0.1008 | 0.9602 | 0.9602 |
| 0.0949 | 12.05 | 4000 | 0.1015 | 0.9623 | 0.9623 |
| 0.0978 | 12.65 | 4200 | 0.0998 | 0.9625 | 0.9625 |
| 0.0992 | 13.25 | 4400 | 0.1029 | 0.9616 | 0.9616 |
| 0.0942 | 13.86 | 4600 | 0.0990 | 0.9627 | 0.9627 |
| 0.0942 | 14.46 | 4800 | 0.1035 | 0.9621 | 0.9621 |
| 0.0914 | 15.06 | 5000 | 0.0984 | 0.9627 | 0.9627 |
| 0.0906 | 15.66 | 5200 | 0.1076 | 0.9587 | 0.9587 |
| 0.0923 | 16.27 | 5400 | 0.1026 | 0.9604 | 0.9604 |
| 0.0887 | 16.87 | 5600 | 0.0984 | 0.9619 | 0.9619 |
| 0.0898 | 17.47 | 5800 | 0.0975 | 0.9627 | 0.9627 |
| 0.0934 | 18.07 | 6000 | 0.1006 | 0.9619 | 0.9619 |
| 0.0871 | 18.67 | 6200 | 0.1059 | 0.9608 | 0.9608 |
| 0.0863 | 19.28 | 6400 | 0.1008 | 0.9617 | 0.9617 |
| 0.0859 | 19.88 | 6600 | 0.1005 | 0.9627 | 0.9627 |
| 0.0868 | 20.48 | 6800 | 0.1025 | 0.9621 | 0.9621 |
| 0.0854 | 21.08 | 7000 | 0.0992 | 0.9617 | 0.9617 |
| 0.0852 | 21.69 | 7200 | 0.0999 | 0.9629 | 0.9629 |
| 0.0841 | 22.29 | 7400 | 0.1009 | 0.9631 | 0.9631 |
| 0.0862 | 22.89 | 7600 | 0.0965 | 0.9627 | 0.9627 |
| 0.083 | 23.49 | 7800 | 0.1022 | 0.9621 | 0.9621 |
| 0.0834 | 24.1 | 8000 | 0.0979 | 0.9640 | 0.9640 |
| 0.0825 | 24.7 | 8200 | 0.1071 | 0.9617 | 0.9617 |
| 0.0827 | 25.3 | 8400 | 0.1026 | 0.9625 | 0.9625 |
| 0.084 | 25.9 | 8600 | 0.0993 | 0.9627 | 0.9627 |
| 0.0817 | 26.51 | 8800 | 0.0999 | 0.9631 | 0.9631 |
| 0.0817 | 27.11 | 9000 | 0.0997 | 0.9627 | 0.9627 |
| 0.0811 | 27.71 | 9200 | 0.1009 | 0.9627 | 0.9627 |
| 0.079 | 28.31 | 9400 | 0.1009 | 0.9627 | 0.9627 |
| 0.0823 | 28.92 | 9600 | 0.0995 | 0.9623 | 0.9623 |
| 0.0789 | 29.52 | 9800 | 0.0999 | 0.9621 | 0.9621 |
| 0.0842 | 30.12 | 10000 | 0.1002 | 0.9631 | 0.9631 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T01:45:21+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_300\_notata-seqsight\_8192\_512\_30M-L8\_f
===========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1197
* F1 Score: 0.9561
* Accuracy: 0.9561
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-70B builds upon the powerful foundation of the [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom, diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290) (the objective is written out below for reference)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
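
For reference, the DPO objective from the paper linked above can be written as follows, where $y_w$ and $y_l$ are the preferred and dispreferred responses, $\pi_{\text{ref}}$ is the frozen reference policy, and $\beta$ controls how far the tuned policy may drift from it:

$$
\mathcal{L}_{\text{DPO}}(\pi_\theta; \pi_{\text{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]
$$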
This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.
⚙️ **Release Details**:
- **Model Size**: 70 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-70B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-70B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 Instruct version; otherwise, there will be a degradation in performance. The model output can be verbose in rare cases; consider setting temperature = 0 (greedy decoding) to reduce this.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch

model_id = "aaditya/OpenBioLLM-Llama3-70B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",  # the pipeline's `device` argument does not accept "auto"; device_map handles placement
)

messages = [
    {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
    {"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,  # greedy decoding stands in for "temperature = 0"; transformers rejects temperature=0.0 when do_sample=True
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 8
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
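
As a rough illustration only, the values above correspond to a Hugging Face `TrainingArguments` along these lines; `output_dir` and the `bf16` flag are assumptions (the H100 hardware supports bf16), and the actual run used Axolotl rather than this exact script.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="openbiollm-70b-sft",   # placeholder, not from the card
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    optim="adamw_bnb_8bit",            # bitsandbytes 8-bit AdamW
    warmup_steps=100,
    num_train_epochs=4,
    bf16=True,                         # assumption for H100 GPUs
)
```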
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
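
Expressed as a PEFT configuration, the adapter settings above correspond roughly to the sketch below; the `task_type` is an assumption based on the causal-LM setup, and the 4-bit quantization that makes this QLoRA is configured separately (e.g. via bitsandbytes when loading the base model).

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj", "gate_proj", "down_proj", "up_proj"],
    task_type="CAUSAL_LM",  # assumption: Llama-3 causal language model
)
```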
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **The results below are from the quantized version of OpenBioLLM-70B.**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | {"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-70B-Instruct", "widget": [{"example_title": "OpenBioLLM-70B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-70B", "results": []}]} | LoneStriker/OpenBioLLM-Llama3-70B-4.65bpw-h6-exl2 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"conversational",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T01:46:57+00:00 | [
"2305.18290",
"2303.13375",
"2212.13138",
"2305.09617",
"2402.07023"
] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #conversational #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|

Advancing Open-source Large Language Models in Medical Domain
=============================================================
Online Demo
|
GitHub
|
Discord
!image/jpeg
Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
Biomedical Specialization: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
Superior Performance: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
Advanced Training Techniques: OpenBioLLM-70B builds upon the powerful foundations of the Meta-Llama-3-70B-Instruct and Meta-Llama-3-70B-Instruct models. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<ul>
<li>Policy Optimization: Direct Preference Optimization (DPO)</li>
<li>Fine-tuning dataset: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)</li>
</ul>
<p>This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.</p>
<p>️ Release Details:</p>
<ul>
<li>Model Size: 70 billion parameters</li>
<li>Quantization: Optimized quantized versions available Here</li>
<li>Language(s) (NLP): en</li>
<li>Developed By: Ankit Pal (Aaditya Ura) from Saama AI Labs</li>
<li>License: Meta-Llama License</li>
<li>Fine-tuned from models: Meta-Llama-3-70B-Instruct</li>
<li>Resources for more information:
<ul>
<li>Paper: Coming soon</li>
</ul>
</li>
</ul>
<p>The model can be fine-tuned for more specialized tasks and datasets as needed.</p>
<p>OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.</p>
<p>We are excited to share OpenBioLLM-70B with researchers and developers around the world.</p>
<h3>Use with transformers</h3>
<p>Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise there will be a degradation in performance. The model output can be verbose in rare cases; consider setting temperature = 0 to make this less likely.</p>
<p>See the snippet below for usage with Transformers:</p>
<h2>Training procedure</h2>
<h3>Training hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>learning_rate: 0.0002</li>
<li>lr_scheduler: cosine</li>
<li>train_batch_size: 12</li>
<li>eval_batch_size: 8</li>
<li>GPU: H100 80GB SXM5</li>
<li>num_devices: 8</li>
<li>optimizer: adamw_bnb_8bit</li>
<li>lr_scheduler_warmup_steps: 100</li>
<li>num_epochs: 4</li>
</ul>
</details>
<h3>Peft hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>adapter: qlora</li>
<li>lora_r: 128</li>
<li>lora_alpha: 256</li>
<li>lora_dropout: 0.05</li>
<li>lora_target_linear: true</li>
</ul>
<p>lora_target_modules:</p>
<ul>
<li>q_proj</li>
<li>v_proj</li>
<li>k_proj</li>
<li>o_proj</li>
<li>gate_proj</li>
<li>down_proj</li>
<li>up_proj</li>
</ul>
</details>
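<p>For reference, these adapter settings translate roughly into a peft <code>LoraConfig</code> as sketched below; the full Axolotl configuration used for training is not reproduced in this card, so treat this as an illustration.</p>

```python
# Rough peft equivalent of the adapter settings listed above (QLoRA additionally
# loads the base model in 4-bit via bitsandbytes, which is omitted in this sketch).
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "v_proj", "k_proj", "o_proj",
        "gate_proj", "down_proj", "up_proj",
    ],
    task_type="CAUSAL_LM",
)
```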
<h3>Training results</h3>
<h3>Framework versions</h3>
<ul>
<li>Transformers 4.39.3</li>
<li>Pytorch 2.1.2+cu121</li>
<li>Datasets 2.18.0</li>
<li>Tokenizers 0.15.1</li>
<li>Axolotl</li>
<li>Lm harness for evaluation</li>
</ul>
<h1>Benchmark Results</h1>
<p>OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.</p>
<p>The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.</p>
<p></p>
<p>OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.</p>
<p>!image/png</p>
<p>Advisory Notice!</p>
<p>While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.</p>
<p>Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.</p>
<p>Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:</p>
<ul>
<li>Thorough testing and evaluation in relevant clinical scenarios</li>
<li>Alignment with evidence-based guidelines and best practices</li>
<li>Mitigation of potential biases and failure modes</li>
<li>Integration with human oversight and interpretation</li>
<li>Compliance with regulatory and ethical standards</li>
</ul>
<p>Always consult a qualified healthcare provider for personal medical needs.</p>
<p>If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:</p>
<p>The accompanying paper is currently in progress and will be released soon.</p>
Contact
--------
We look forward to hearing from you and collaborating on this exciting project!
Contributors:
* Ankit Pal (Aaditya Ura) [aadityaura at gmail dot com]
* Saama AI Labs
* Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
References
==========
We thank the Meta Team for their amazing models!
Result sources
* [1] GPT-4 Capabilities of GPT-4 on Medical Challenge Problems
* [2] Med-PaLM-1 Large Language Models Encode Clinical Knowledge
* [3] Med-PaLM-2 Towards Expert-Level Medical Question Answering with Large Language Models
* [4] Gemini-1.0 Gemini Goes to Med School
| [] | [
"TAGS\n#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #conversational #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_notata-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1357
- F1 Score: 0.9563
- Accuracy: 0.9563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
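
For readers who want to approximate this setup, the values above map roughly onto 🤗 `TrainingArguments` as sketched below; the model and dataset wiring used by the authors is not shown in this card, so this is an illustration rather than the exact training script.

```python
# Illustrative mapping of the hyperparameters above onto TrainingArguments;
# the PEFT adapter, model, and dataset setup are not documented in this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_prom_prom_300_notata-seqsight_8192_512_30M-L32_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,            # "training_steps: 10000"
    evaluation_strategy="steps",
    eval_steps=200,              # matches the 200-step evaluation cadence in the table below
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 corresponds to the Trainer's default optimizer settings.
```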
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.2577 | 0.6 | 200 | 0.1254 | 0.9495 | 0.9495 |
| 0.1373 | 1.2 | 400 | 0.1190 | 0.9538 | 0.9538 |
| 0.1331 | 1.81 | 600 | 0.1103 | 0.9567 | 0.9567 |
| 0.1203 | 2.41 | 800 | 0.1086 | 0.9580 | 0.9580 |
| 0.117 | 3.01 | 1000 | 0.1067 | 0.9606 | 0.9606 |
| 0.1152 | 3.61 | 1200 | 0.1070 | 0.9578 | 0.9578 |
| 0.1095 | 4.22 | 1400 | 0.1018 | 0.9601 | 0.9601 |
| 0.1085 | 4.82 | 1600 | 0.1010 | 0.9601 | 0.9601 |
| 0.1024 | 5.42 | 1800 | 0.1047 | 0.9587 | 0.9587 |
| 0.0985 | 6.02 | 2000 | 0.1062 | 0.9593 | 0.9593 |
| 0.0992 | 6.63 | 2200 | 0.0972 | 0.9627 | 0.9627 |
| 0.0949 | 7.23 | 2400 | 0.1007 | 0.9627 | 0.9627 |
| 0.0927 | 7.83 | 2600 | 0.0981 | 0.9614 | 0.9614 |
| 0.093 | 8.43 | 2800 | 0.1121 | 0.9602 | 0.9602 |
| 0.0913 | 9.04 | 3000 | 0.1099 | 0.9593 | 0.9593 |
| 0.0873 | 9.64 | 3200 | 0.0982 | 0.9625 | 0.9625 |
| 0.0817 | 10.24 | 3400 | 0.0991 | 0.9653 | 0.9653 |
| 0.0849 | 10.84 | 3600 | 0.0946 | 0.9648 | 0.9648 |
| 0.0793 | 11.45 | 3800 | 0.0945 | 0.9653 | 0.9653 |
| 0.0775 | 12.05 | 4000 | 0.0976 | 0.9663 | 0.9663 |
| 0.0794 | 12.65 | 4200 | 0.0920 | 0.9661 | 0.9661 |
| 0.0761 | 13.25 | 4400 | 0.1008 | 0.9636 | 0.9636 |
| 0.0744 | 13.86 | 4600 | 0.0911 | 0.9655 | 0.9655 |
| 0.0725 | 14.46 | 4800 | 0.0953 | 0.9665 | 0.9665 |
| 0.069 | 15.06 | 5000 | 0.0946 | 0.9657 | 0.9657 |
| 0.0673 | 15.66 | 5200 | 0.1017 | 0.9647 | 0.9648 |
| 0.0683 | 16.27 | 5400 | 0.0974 | 0.9666 | 0.9666 |
| 0.0648 | 16.87 | 5600 | 0.0977 | 0.9659 | 0.9659 |
| 0.0618 | 17.47 | 5800 | 0.0996 | 0.9663 | 0.9663 |
| 0.0661 | 18.07 | 6000 | 0.0960 | 0.9685 | 0.9685 |
| 0.0604 | 18.67 | 6200 | 0.1063 | 0.9653 | 0.9653 |
| 0.0578 | 19.28 | 6400 | 0.1030 | 0.9670 | 0.9670 |
| 0.0597 | 19.88 | 6600 | 0.1022 | 0.9674 | 0.9674 |
| 0.0561 | 20.48 | 6800 | 0.1018 | 0.9680 | 0.9680 |
| 0.0561 | 21.08 | 7000 | 0.1037 | 0.9659 | 0.9659 |
| 0.0551 | 21.69 | 7200 | 0.1060 | 0.9661 | 0.9661 |
| 0.0553 | 22.29 | 7400 | 0.1049 | 0.9680 | 0.9680 |
| 0.0551 | 22.89 | 7600 | 0.1014 | 0.9661 | 0.9661 |
| 0.0525 | 23.49 | 7800 | 0.1117 | 0.9648 | 0.9648 |
| 0.0512 | 24.1 | 8000 | 0.1035 | 0.9682 | 0.9682 |
| 0.05 | 24.7 | 8200 | 0.1084 | 0.9648 | 0.9648 |
| 0.0496 | 25.3 | 8400 | 0.1062 | 0.9672 | 0.9672 |
| 0.0507 | 25.9 | 8600 | 0.1045 | 0.9666 | 0.9666 |
| 0.0467 | 26.51 | 8800 | 0.1060 | 0.9682 | 0.9682 |
| 0.0493 | 27.11 | 9000 | 0.1060 | 0.9672 | 0.9672 |
| 0.0459 | 27.71 | 9200 | 0.1086 | 0.9666 | 0.9666 |
| 0.0451 | 28.31 | 9400 | 0.1074 | 0.9672 | 0.9672 |
| 0.0477 | 28.92 | 9600 | 0.1075 | 0.9672 | 0.9672 |
| 0.0438 | 29.52 | 9800 | 0.1086 | 0.9680 | 0.9680 |
| 0.0471 | 30.12 | 10000 | 0.1084 | 0.9676 | 0.9676 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T01:49:08+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_300\_notata-seqsight\_8192\_512\_30M-L32\_f
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1357
* F1 Score: 0.9563
* Accuracy: 0.9563
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_all-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4231
- F1 Score: 0.8098
- Accuracy: 0.8098
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5588 | 0.54 | 200 | 0.4980 | 0.7634 | 0.7649 |
| 0.4803 | 1.08 | 400 | 0.4776 | 0.7765 | 0.7772 |
| 0.4628 | 1.62 | 600 | 0.4580 | 0.7869 | 0.7870 |
| 0.4571 | 2.16 | 800 | 0.4592 | 0.7882 | 0.7883 |
| 0.4496 | 2.7 | 1000 | 0.4558 | 0.7871 | 0.7875 |
| 0.4478 | 3.24 | 1200 | 0.4512 | 0.7892 | 0.7895 |
| 0.4421 | 3.78 | 1400 | 0.4567 | 0.7850 | 0.7860 |
| 0.4374 | 4.32 | 1600 | 0.4588 | 0.7866 | 0.7873 |
| 0.4377 | 4.86 | 1800 | 0.4518 | 0.7891 | 0.7895 |
| 0.4366 | 5.41 | 2000 | 0.4540 | 0.7877 | 0.7883 |
| 0.432 | 5.95 | 2200 | 0.4466 | 0.7939 | 0.7939 |
| 0.4349 | 6.49 | 2400 | 0.4450 | 0.7932 | 0.7934 |
| 0.4274 | 7.03 | 2600 | 0.4426 | 0.7986 | 0.7986 |
| 0.4312 | 7.57 | 2800 | 0.4462 | 0.7942 | 0.7944 |
| 0.4242 | 8.11 | 3000 | 0.4489 | 0.7917 | 0.7922 |
| 0.427 | 8.65 | 3200 | 0.4402 | 0.7966 | 0.7966 |
| 0.4282 | 9.19 | 3400 | 0.4431 | 0.7969 | 0.7970 |
| 0.4242 | 9.73 | 3600 | 0.4561 | 0.7871 | 0.7882 |
| 0.4275 | 10.27 | 3800 | 0.4456 | 0.7933 | 0.7937 |
| 0.4224 | 10.81 | 4000 | 0.4426 | 0.7976 | 0.7976 |
| 0.4226 | 11.35 | 4200 | 0.4479 | 0.7925 | 0.7929 |
| 0.4237 | 11.89 | 4400 | 0.4473 | 0.7917 | 0.7924 |
| 0.4201 | 12.43 | 4600 | 0.4396 | 0.7998 | 0.7998 |
| 0.4193 | 12.97 | 4800 | 0.4427 | 0.7960 | 0.7963 |
| 0.4218 | 13.51 | 5000 | 0.4529 | 0.7881 | 0.7894 |
| 0.4195 | 14.05 | 5200 | 0.4406 | 0.7973 | 0.7975 |
| 0.4216 | 14.59 | 5400 | 0.4386 | 0.7969 | 0.7970 |
| 0.4208 | 15.14 | 5600 | 0.4366 | 0.7986 | 0.7986 |
| 0.4199 | 15.68 | 5800 | 0.4480 | 0.7944 | 0.7951 |
| 0.4136 | 16.22 | 6000 | 0.4459 | 0.7959 | 0.7965 |
| 0.4206 | 16.76 | 6200 | 0.4396 | 0.7950 | 0.7954 |
| 0.4201 | 17.3 | 6400 | 0.4373 | 0.7973 | 0.7976 |
| 0.4166 | 17.84 | 6600 | 0.4403 | 0.7954 | 0.7958 |
| 0.4152 | 18.38 | 6800 | 0.4479 | 0.7956 | 0.7965 |
| 0.4167 | 18.92 | 7000 | 0.4396 | 0.7961 | 0.7966 |
| 0.4113 | 19.46 | 7200 | 0.4400 | 0.7975 | 0.7978 |
| 0.4222 | 20.0 | 7400 | 0.4378 | 0.7965 | 0.7968 |
| 0.4179 | 20.54 | 7600 | 0.4394 | 0.7961 | 0.7965 |
| 0.4143 | 21.08 | 7800 | 0.4409 | 0.7966 | 0.7970 |
| 0.4166 | 21.62 | 8000 | 0.4361 | 0.7959 | 0.7961 |
| 0.4143 | 22.16 | 8200 | 0.4396 | 0.7978 | 0.7981 |
| 0.4185 | 22.7 | 8400 | 0.4414 | 0.7981 | 0.7986 |
| 0.4112 | 23.24 | 8600 | 0.4363 | 0.7959 | 0.7961 |
| 0.4098 | 23.78 | 8800 | 0.4384 | 0.7970 | 0.7973 |
| 0.4146 | 24.32 | 9000 | 0.4372 | 0.7965 | 0.7968 |
| 0.4143 | 24.86 | 9200 | 0.4373 | 0.7963 | 0.7966 |
| 0.4212 | 25.41 | 9400 | 0.4369 | 0.7972 | 0.7975 |
| 0.4085 | 25.95 | 9600 | 0.4369 | 0.7971 | 0.7973 |
| 0.4184 | 26.49 | 9800 | 0.4384 | 0.7969 | 0.7973 |
| 0.4112 | 27.03 | 10000 | 0.4381 | 0.7970 | 0.7973 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T01:51:59+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_core\_all-seqsight\_8192\_512\_30M-L1\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4231
* F1 Score: 0.8098
* Accuracy: 0.8098
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_all-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4079
- F1 Score: 0.8099
- Accuracy: 0.8100
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5264 | 0.54 | 200 | 0.4686 | 0.7814 | 0.7818 |
| 0.4549 | 1.08 | 400 | 0.4736 | 0.7859 | 0.7872 |
| 0.4386 | 1.62 | 600 | 0.4442 | 0.7933 | 0.7934 |
| 0.4338 | 2.16 | 800 | 0.4449 | 0.7924 | 0.7927 |
| 0.4286 | 2.7 | 1000 | 0.4395 | 0.7962 | 0.7965 |
| 0.4289 | 3.24 | 1200 | 0.4338 | 0.7990 | 0.7992 |
| 0.4217 | 3.78 | 1400 | 0.4411 | 0.7937 | 0.7944 |
| 0.4167 | 4.32 | 1600 | 0.4407 | 0.7983 | 0.7986 |
| 0.4204 | 4.86 | 1800 | 0.4373 | 0.7972 | 0.7975 |
| 0.4174 | 5.41 | 2000 | 0.4398 | 0.7970 | 0.7975 |
| 0.4149 | 5.95 | 2200 | 0.4392 | 0.8025 | 0.8025 |
| 0.4181 | 6.49 | 2400 | 0.4310 | 0.8040 | 0.8041 |
| 0.4099 | 7.03 | 2600 | 0.4356 | 0.8042 | 0.8042 |
| 0.4133 | 7.57 | 2800 | 0.4323 | 0.8007 | 0.8007 |
| 0.4071 | 8.11 | 3000 | 0.4342 | 0.7974 | 0.7976 |
| 0.4064 | 8.65 | 3200 | 0.4309 | 0.8012 | 0.8014 |
| 0.4103 | 9.19 | 3400 | 0.4332 | 0.8013 | 0.8014 |
| 0.4048 | 9.73 | 3600 | 0.4405 | 0.7982 | 0.7988 |
| 0.4089 | 10.27 | 3800 | 0.4359 | 0.8012 | 0.8015 |
| 0.404 | 10.81 | 4000 | 0.4351 | 0.8049 | 0.8049 |
| 0.4014 | 11.35 | 4200 | 0.4333 | 0.8017 | 0.8019 |
| 0.4031 | 11.89 | 4400 | 0.4320 | 0.8016 | 0.8020 |
| 0.3985 | 12.43 | 4600 | 0.4300 | 0.8035 | 0.8035 |
| 0.3992 | 12.97 | 4800 | 0.4322 | 0.8041 | 0.8042 |
| 0.3996 | 13.51 | 5000 | 0.4400 | 0.7977 | 0.7986 |
| 0.3982 | 14.05 | 5200 | 0.4290 | 0.8077 | 0.8078 |
| 0.3978 | 14.59 | 5400 | 0.4289 | 0.8055 | 0.8056 |
| 0.3976 | 15.14 | 5600 | 0.4280 | 0.8074 | 0.8074 |
| 0.395 | 15.68 | 5800 | 0.4372 | 0.8009 | 0.8015 |
| 0.3912 | 16.22 | 6000 | 0.4322 | 0.8028 | 0.8034 |
| 0.3948 | 16.76 | 6200 | 0.4253 | 0.8059 | 0.8063 |
| 0.3948 | 17.3 | 6400 | 0.4250 | 0.8069 | 0.8071 |
| 0.3926 | 17.84 | 6600 | 0.4285 | 0.8068 | 0.8069 |
| 0.3893 | 18.38 | 6800 | 0.4347 | 0.8002 | 0.8010 |
| 0.3894 | 18.92 | 7000 | 0.4267 | 0.8048 | 0.8052 |
| 0.3871 | 19.46 | 7200 | 0.4276 | 0.8072 | 0.8074 |
| 0.3959 | 20.0 | 7400 | 0.4258 | 0.8069 | 0.8071 |
| 0.3886 | 20.54 | 7600 | 0.4261 | 0.8075 | 0.8078 |
| 0.3876 | 21.08 | 7800 | 0.4278 | 0.8088 | 0.8090 |
| 0.3883 | 21.62 | 8000 | 0.4230 | 0.8094 | 0.8095 |
| 0.3849 | 22.16 | 8200 | 0.4264 | 0.8072 | 0.8074 |
| 0.3894 | 22.7 | 8400 | 0.4281 | 0.8060 | 0.8064 |
| 0.3834 | 23.24 | 8600 | 0.4238 | 0.8075 | 0.8076 |
| 0.3827 | 23.78 | 8800 | 0.4259 | 0.8069 | 0.8071 |
| 0.3844 | 24.32 | 9000 | 0.4245 | 0.8082 | 0.8083 |
| 0.3844 | 24.86 | 9200 | 0.4240 | 0.8081 | 0.8083 |
| 0.3904 | 25.41 | 9400 | 0.4242 | 0.8069 | 0.8071 |
| 0.3788 | 25.95 | 9600 | 0.4256 | 0.8080 | 0.8081 |
| 0.3894 | 26.49 | 9800 | 0.4259 | 0.8055 | 0.8057 |
| 0.381 | 27.03 | 10000 | 0.4260 | 0.8067 | 0.8069 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T01:54:10+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_core\_all-seqsight\_8192\_512\_30M-L8\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4079
* F1 Score: 0.8099
* Accuracy: 0.8100
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_3_edges
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
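
The card does not include a usage example; a minimal sketch is given below. The entity label set depends on the undocumented training data, so the output labels shown by this snippet are whatever the checkpoint was trained with.

```python
# Minimal usage sketch; the entity label set is not documented in this card.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="anirudhramoo/model_3_edges",  # repo id from this card's metadata
    aggregation_strategy="simple",
)
print(ner("Example input sentence to tag."))
```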
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "model_3_edges", "results": []}]} | anirudhramoo/model_3_edges | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T01:55:41+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# model_3_edges
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# model_3_edges\n\nThis model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# model_3_edges\n\nThis model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
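
Pending the authors' snippet, a generic sketch is given below; the task and prompt format are assumptions and may not match the actual checkpoint.

```python
# Hypothetical starter code: the card does not document the task or prompt format,
# so a plain causal-LM load is assumed here and may not match the actual checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "jd0g/Mistral-7B-NLI-v1.0"  # repo id from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

prompt = "Premise: A dog runs across the park.\nHypothesis: An animal is outdoors.\nLabel:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```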
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jd0g/Mistral-7B-NLI-v1.0 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T01:55:53+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/yegu2vs | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T01:59:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/5ww0fon | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T01:59:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/lfucypx | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T01:59:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/fcku0mx | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T01:59:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/ehgnwni | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T01:59:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/2evd8w5 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T01:59:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # Description
4-bit AWQ-quantized version of [stylellm/HongLouMeng-6b](https://huggingface.co/stylellm/HongLouMeng-6b)
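For orientation only, here is a minimal loading sketch that is not part of the original card. It assumes the checkpoint works with the standard 🤗 Transformers AWQ integration (requires `transformers`, `autoawq` and `accelerate` plus a CUDA GPU); the prompt is purely illustrative.

```python
# Hedged sketch, not from the original card: loading the 4-bit AWQ checkpoint via
# transformers' AWQ integration. Assumes `pip install transformers autoawq accelerate`
# and a CUDA-capable GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stylellm/HongLouMeng-6b-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "满纸荒唐言,"  # illustrative prompt only; replace with your own text
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```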
| {"license": "other", "license_name": "yi-license", "license_link": "https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE"} | stylellm/HongLouMeng-6b-AWQ | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-27T01:59:09+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| # Description
4-bit AWQ-quantized version of stylellm/HongLouMeng-6b
| [
"# Description\n4-bit AWQ-quantized version of stylellm/HongLouMeng-6b"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Description\n4-bit AWQ-quantized version of stylellm/HongLouMeng-6b"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
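As a concrete, hedged illustration, a single-file quant from the table below can be run with `llama-cpp-python`; split quants (e.g. the two Q6_K parts) would first need to be concatenated into one file, as described in the README linked above.

```python
# Hedged sketch: assumes llama-cpp-python and huggingface_hub are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF",
    filename="OpenBioLLM-Llama3-70B.i1-Q4_K_M.gguf",  # one of the quants listed below
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Question: What does hemoglobin do?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```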
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "aaditya/OpenBioLLM-Llama3-70B", "quantized_by": "mradermacher"} | mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF | null | [
"transformers",
"gguf",
"llama-3",
"llama",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"base_model:aaditya/OpenBioLLM-Llama3-70B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T01:59:56+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama-3 #llama #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #base_model-aaditya/OpenBioLLM-Llama3-70B #license-llama3 #endpoints_compatible #region-us
| About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #llama-3 #llama #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #base_model-aaditya/OpenBioLLM-Llama3-70B #license-llama3 #endpoints_compatible #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_all-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4067
- F1 Score: 0.8189
- Accuracy: 0.8189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
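For illustration only, these settings map roughly onto a `transformers.TrainingArguments` object as sketched below (this is not the actual training script; the output directory name is an assumption, and the Adam betas/epsilon are the values already listed above).

```python
# Hedged sketch of the hyperparameters listed above; not the original training code.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_prom_prom_core_all-seqsight_8192_512_30M-L32_f",  # assumed name
    learning_rate=5e-4,                # 0.0005
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,                  # training_steps: 10000
)
```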
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5065 | 0.54 | 200 | 0.4591 | 0.7903 | 0.7904 |
| 0.446 | 1.08 | 400 | 0.4695 | 0.7843 | 0.7860 |
| 0.4311 | 1.62 | 600 | 0.4409 | 0.7956 | 0.7959 |
| 0.4271 | 2.16 | 800 | 0.4442 | 0.7928 | 0.7932 |
| 0.4215 | 2.7 | 1000 | 0.4350 | 0.8007 | 0.8010 |
| 0.4218 | 3.24 | 1200 | 0.4286 | 0.8030 | 0.8030 |
| 0.4131 | 3.78 | 1400 | 0.4343 | 0.7960 | 0.7965 |
| 0.4094 | 4.32 | 1600 | 0.4329 | 0.7994 | 0.7995 |
| 0.4105 | 4.86 | 1800 | 0.4361 | 0.7977 | 0.7980 |
| 0.406 | 5.41 | 2000 | 0.4350 | 0.8007 | 0.8010 |
| 0.4037 | 5.95 | 2200 | 0.4336 | 0.8078 | 0.8078 |
| 0.4044 | 6.49 | 2400 | 0.4245 | 0.8040 | 0.8041 |
| 0.3964 | 7.03 | 2600 | 0.4262 | 0.8053 | 0.8054 |
| 0.3959 | 7.57 | 2800 | 0.4255 | 0.8037 | 0.8037 |
| 0.3925 | 8.11 | 3000 | 0.4263 | 0.8048 | 0.8049 |
| 0.3865 | 8.65 | 3200 | 0.4245 | 0.8067 | 0.8069 |
| 0.3904 | 9.19 | 3400 | 0.4300 | 0.8071 | 0.8071 |
| 0.3841 | 9.73 | 3600 | 0.4312 | 0.8044 | 0.8049 |
| 0.3855 | 10.27 | 3800 | 0.4348 | 0.8053 | 0.8056 |
| 0.3805 | 10.81 | 4000 | 0.4330 | 0.8063 | 0.8063 |
| 0.3749 | 11.35 | 4200 | 0.4266 | 0.8092 | 0.8093 |
| 0.3775 | 11.89 | 4400 | 0.4275 | 0.8081 | 0.8084 |
| 0.3708 | 12.43 | 4600 | 0.4262 | 0.8130 | 0.8130 |
| 0.3731 | 12.97 | 4800 | 0.4267 | 0.8098 | 0.8100 |
| 0.3709 | 13.51 | 5000 | 0.4308 | 0.8051 | 0.8057 |
| 0.3675 | 14.05 | 5200 | 0.4300 | 0.8127 | 0.8127 |
| 0.3654 | 14.59 | 5400 | 0.4312 | 0.8115 | 0.8117 |
| 0.3665 | 15.14 | 5600 | 0.4265 | 0.8123 | 0.8123 |
| 0.3618 | 15.68 | 5800 | 0.4333 | 0.8090 | 0.8093 |
| 0.3589 | 16.22 | 6000 | 0.4283 | 0.8101 | 0.8105 |
| 0.3591 | 16.76 | 6200 | 0.4188 | 0.8100 | 0.8103 |
| 0.3592 | 17.3 | 6400 | 0.4231 | 0.8118 | 0.8120 |
| 0.3575 | 17.84 | 6600 | 0.4267 | 0.8109 | 0.8110 |
| 0.3505 | 18.38 | 6800 | 0.4350 | 0.8037 | 0.8044 |
| 0.3527 | 18.92 | 7000 | 0.4257 | 0.8120 | 0.8123 |
| 0.3468 | 19.46 | 7200 | 0.4292 | 0.8099 | 0.8101 |
| 0.3558 | 20.0 | 7400 | 0.4245 | 0.8129 | 0.8130 |
| 0.346 | 20.54 | 7600 | 0.4263 | 0.8092 | 0.8095 |
| 0.3479 | 21.08 | 7800 | 0.4252 | 0.8115 | 0.8117 |
| 0.3449 | 21.62 | 8000 | 0.4228 | 0.8111 | 0.8111 |
| 0.3426 | 22.16 | 8200 | 0.4274 | 0.8125 | 0.8127 |
| 0.3462 | 22.7 | 8400 | 0.4288 | 0.8082 | 0.8086 |
| 0.3417 | 23.24 | 8600 | 0.4225 | 0.8141 | 0.8142 |
| 0.3408 | 23.78 | 8800 | 0.4259 | 0.8131 | 0.8132 |
| 0.3422 | 24.32 | 9000 | 0.4230 | 0.8131 | 0.8132 |
| 0.3403 | 24.86 | 9200 | 0.4245 | 0.8137 | 0.8139 |
| 0.3457 | 25.41 | 9400 | 0.4242 | 0.8125 | 0.8127 |
| 0.3364 | 25.95 | 9600 | 0.4270 | 0.8134 | 0.8135 |
| 0.3418 | 26.49 | 9800 | 0.4278 | 0.8111 | 0.8113 |
| 0.3354 | 27.03 | 10000 | 0.4276 | 0.8120 | 0.8122 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T02:01:26+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_core\_all-seqsight\_8192\_512\_30M-L32\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4067
* F1 Score: 0.8189
* Accuracy: 0.8189
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs256_nodpo_only4w_iter_3
This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_2](https://huggingface.co/ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
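(The total batch sizes above follow directly from the per-device settings: 8 per device × 8 devices × 4 gradient-accumulation steps = 256 for training, and 8 × 8 = 64 for evaluation.)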
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_2", "model-index": [{"name": "0.001_4iters_bs256_nodpo_only4w_iter_3", "results": []}]} | ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_3 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T02:01:33+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.001_4iters_bs256_nodpo_only4w_iter_3
This model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_2 on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| [
"# 0.001_4iters_bs256_nodpo_only4w_iter_3\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_2 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.001_4iters_bs256_nodpo_only4w_iter_3\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_2 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_notata-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3822
- F1 Score: 0.8334
- Accuracy: 0.8334
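For reference, a hedged sketch of how an adapter like this could be loaded on top of the base checkpoint (the head class, label count, and any `trust_remote_code` requirement are assumptions, not stated in this card):

```python
# Hedged sketch; adjust the model class to whatever head the base checkpoint actually provides.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_8192_512_30M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)  # num_labels assumed
model = PeftModel.from_pretrained(base, adapter_id).eval()
```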
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5488 | 0.6 | 200 | 0.4335 | 0.7988 | 0.7995 |
| 0.4495 | 1.2 | 400 | 0.4023 | 0.8161 | 0.8161 |
| 0.43 | 1.81 | 600 | 0.3943 | 0.8239 | 0.8240 |
| 0.4301 | 2.41 | 800 | 0.3876 | 0.8230 | 0.8231 |
| 0.4157 | 3.01 | 1000 | 0.3833 | 0.8268 | 0.8268 |
| 0.4118 | 3.61 | 1200 | 0.3797 | 0.8285 | 0.8285 |
| 0.4131 | 4.22 | 1400 | 0.3810 | 0.8274 | 0.8274 |
| 0.4074 | 4.82 | 1600 | 0.3778 | 0.8303 | 0.8304 |
| 0.3984 | 5.42 | 1800 | 0.3786 | 0.8300 | 0.8300 |
| 0.4015 | 6.02 | 2000 | 0.3758 | 0.8295 | 0.8297 |
| 0.3992 | 6.63 | 2200 | 0.3731 | 0.8310 | 0.8310 |
| 0.3953 | 7.23 | 2400 | 0.3751 | 0.8323 | 0.8325 |
| 0.396 | 7.83 | 2600 | 0.3715 | 0.8319 | 0.8319 |
| 0.3882 | 8.43 | 2800 | 0.3801 | 0.8325 | 0.8329 |
| 0.394 | 9.04 | 3000 | 0.3714 | 0.8331 | 0.8332 |
| 0.3891 | 9.64 | 3200 | 0.3694 | 0.8327 | 0.8327 |
| 0.3915 | 10.24 | 3400 | 0.3700 | 0.8344 | 0.8344 |
| 0.3855 | 10.84 | 3600 | 0.3704 | 0.8359 | 0.8359 |
| 0.387 | 11.45 | 3800 | 0.3690 | 0.8347 | 0.8347 |
| 0.3841 | 12.05 | 4000 | 0.3679 | 0.8363 | 0.8363 |
| 0.3844 | 12.65 | 4200 | 0.3713 | 0.8364 | 0.8364 |
| 0.3836 | 13.25 | 4400 | 0.3693 | 0.8385 | 0.8385 |
| 0.3875 | 13.86 | 4600 | 0.3697 | 0.8368 | 0.8368 |
| 0.3828 | 14.46 | 4800 | 0.3696 | 0.8366 | 0.8366 |
| 0.3801 | 15.06 | 5000 | 0.3684 | 0.8362 | 0.8363 |
| 0.3785 | 15.66 | 5200 | 0.3678 | 0.8385 | 0.8385 |
| 0.3792 | 16.27 | 5400 | 0.3700 | 0.8368 | 0.8368 |
| 0.3788 | 16.87 | 5600 | 0.3746 | 0.8328 | 0.8332 |
| 0.3774 | 17.47 | 5800 | 0.3755 | 0.8335 | 0.8338 |
| 0.3827 | 18.07 | 6000 | 0.3699 | 0.8381 | 0.8383 |
| 0.3788 | 18.67 | 6200 | 0.3672 | 0.8374 | 0.8374 |
| 0.3824 | 19.28 | 6400 | 0.3672 | 0.8382 | 0.8381 |
| 0.3798 | 19.88 | 6600 | 0.3674 | 0.8375 | 0.8376 |
| 0.3732 | 20.48 | 6800 | 0.3686 | 0.8371 | 0.8372 |
| 0.384 | 21.08 | 7000 | 0.3672 | 0.8389 | 0.8391 |
| 0.3794 | 21.69 | 7200 | 0.3673 | 0.8379 | 0.8379 |
| 0.3723 | 22.29 | 7400 | 0.3667 | 0.8364 | 0.8364 |
| 0.3757 | 22.89 | 7600 | 0.3674 | 0.8394 | 0.8395 |
| 0.3773 | 23.49 | 7800 | 0.3658 | 0.8379 | 0.8379 |
| 0.3766 | 24.1 | 8000 | 0.3675 | 0.8373 | 0.8374 |
| 0.3772 | 24.7 | 8200 | 0.3696 | 0.8360 | 0.8363 |
| 0.3701 | 25.3 | 8400 | 0.3690 | 0.8382 | 0.8383 |
| 0.3789 | 25.9 | 8600 | 0.3663 | 0.8390 | 0.8391 |
| 0.3779 | 26.51 | 8800 | 0.3662 | 0.8383 | 0.8383 |
| 0.3753 | 27.11 | 9000 | 0.3665 | 0.8398 | 0.8398 |
| 0.3807 | 27.71 | 9200 | 0.3664 | 0.8386 | 0.8387 |
| 0.3687 | 28.31 | 9400 | 0.3675 | 0.8395 | 0.8396 |
| 0.3762 | 28.92 | 9600 | 0.3668 | 0.8385 | 0.8385 |
| 0.379 | 29.52 | 9800 | 0.3665 | 0.8388 | 0.8389 |
| 0.3675 | 30.12 | 10000 | 0.3668 | 0.8390 | 0.8391 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T02:03:27+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_core\_notata-seqsight\_8192\_512\_30M-L1\_f
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3822
* F1 Score: 0.8334
* Accuracy: 0.8334
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers | # EinsteinBagel-8B (Einstein V6.1 & Bagel V1.0)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
The purpose of this experiment was to combine the maximum amount of finetuned datasets possible for the Llama 3 8B architecture.
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) as a base.
### Models Merged
The following models were included in the merge:
* [Weyaxi/Einstein-v6.1-Llama3-8B](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B)
* [jondurbin/bagel-8b-v1.0](https://huggingface.co/jondurbin/bagel-8b-v1.0)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: meta-llama/Meta-Llama-3-8B
- model: jondurbin/bagel-8b-v1.0
- model: Weyaxi/Einstein-v6.1-Llama3-8B
merge_method: model_stock
base_model: meta-llama/Meta-Llama-3-8B
dtype: bfloat16
```
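A hedged usage sketch, assuming the merged weights load like any other Llama-3-family checkpoint:

```python
# Hedged sketch: standard transformers text-generation pipeline.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="TitleOS/EinsteinBagel-8B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(pipe("Explain the photoelectric effect in one short paragraph.", max_new_tokens=128)[0]["generated_text"])
```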
| {"license": "llama3", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Weyaxi/Einstein-v6.1-Llama3-8B", "meta-llama/Meta-Llama-3-8B", "jondurbin/bagel-8b-v1.0"]} | TitleOS/EinsteinBagel-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:Weyaxi/Einstein-v6.1-Llama3-8B",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:jondurbin/bagel-8b-v1.0",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T02:04:04+00:00 | [
"2403.19522"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2403.19522 #base_model-Weyaxi/Einstein-v6.1-Llama3-8B #base_model-meta-llama/Meta-Llama-3-8B #base_model-jondurbin/bagel-8b-v1.0 #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # EinsteinBagel-8B (Einstein V6.1 & Bagel V1.0)
This is a merge of pre-trained language models created using mergekit.
The purpose of this experiment was to combine the maximum amount of finetuned datasets possible for the Llama 3 8B architecture.
## Merge Details
### Merge Method
This model was merged using the Model Stock merge method using meta-llama/Meta-Llama-3-8B as a base.
### Models Merged
The following models were included in the merge:
* Weyaxi/Einstein-v6.1-Llama3-8B
* jondurbin/bagel-8b-v1.0
### Configuration
The following YAML configuration was used to produce this model:
| [
"# EinsteinBagel-8B (Einstein V6.1 & Bagel V1.0)\n\nThis is a merge of pre-trained language models created using mergekit.\nThe purpose of this experiment was to combine the maximum amount of finetuned datasets possible for the Llama 3 8B architecture.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the Model Stock merge method using meta-llama/Meta-Llama-3-8B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* Weyaxi/Einstein-v6.1-Llama3-8B\n* jondurbin/bagel-8b-v1.0",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2403.19522 #base_model-Weyaxi/Einstein-v6.1-Llama3-8B #base_model-meta-llama/Meta-Llama-3-8B #base_model-jondurbin/bagel-8b-v1.0 #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# EinsteinBagel-8B (Einstein V6.1 & Bagel V1.0)\n\nThis is a merge of pre-trained language models created using mergekit.\nThe purpose of this experiment was to combine the maximum amount of finetuned datasets possible for the Llama 3 8B architecture.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the Model Stock merge method using meta-llama/Meta-Llama-3-8B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* Weyaxi/Einstein-v6.1-Llama3-8B\n* jondurbin/bagel-8b-v1.0",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers |
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-70B builds upon the powerful foundation of the [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.
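For context, the preference-optimization objective from the DPO paper linked above has the form (β is the KL-penalty strength, and y_w / y_l are the preferred and rejected responses):

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,y_w,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} - \beta \log \frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right]
$$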
⚙️ **Release Details**:
- **Model Size**: 70 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-70B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [Meta-Llama-3-70B-Instruct](meta-llama/Meta-Llama-3-70B-Instruct)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-70B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch

model_id = "aaditya/OpenBioLLM-Llama3-70B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",  # spread the 70B weights across the available GPUs
)

messages = [
    {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
    {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg pill?"},
]

# Build the prompt with the exact Llama-3 instruct chat template
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,  # greedy decoding, i.e. the recommended temperature = 0
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 8
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
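For illustration, the adapter settings above correspond roughly to a `peft.LoraConfig` like the sketch below (the task type is an assumption, and the QLoRA 4-bit quantization setup is omitted).

```python
# Hedged sketch of the PEFT hyperparameters listed above; not the original training code.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj", "gate_proj", "down_proj", "up_proj"],
    task_type="CAUSAL_LM",  # assumed
)
```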
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **The results below are from the quantized version of OpenBioLLM-70B.**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4: [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | {"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-70B-Instruct", "widget": [{"example_title": "OpenBioLLM-70B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-70B", "results": []}]} | LoneStriker/OpenBioLLM-Llama3-70B-5.0bpw-h6-exl2 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"conversational",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"5-bit",
"region:us"
] | null | 2024-04-27T02:04:19+00:00 | [
"2305.18290",
"2303.13375",
"2212.13138",
"2305.09617",
"2402.07023"
] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #conversational #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us
|

Advancing Open-source Large Language Models in Medical Domain
=============================================================
Online Demo
|
GitHub
|
[](#) |
Discord
!image/jpeg
Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
Biomedical Specialization: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
Superior Performance: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
Advanced Training Techniques: OpenBioLLM-70B builds upon the powerful foundation of the Meta-Llama-3-70B-Instruct model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<ul>
<li>Policy Optimization: Direct Preference Optimization (DPO)</li>
<li>Fine-tuning dataset: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)</li>
</ul>
<p>This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.</p>
<p>️ Release Details:</p>
<ul>
<li>Model Size: 70 billion parameters</li>
<li>Quantization: Optimized quantized versions available Here</li>
<li>Language(s) (NLP): en</li>
<li>Developed By: Ankit Pal (Aaditya Ura) from Saama AI Labs</li>
<li>License: Meta-Llama License</li>
<li>Fine-tuned from models: Meta-Llama-3-70B-Instruct</li>
<li>Resources for more information:
<ul>
<li>Paper: Coming soon</li>
</ul>
</li>
</ul>
<p>The model can be fine-tuned for more specialized tasks and datasets as needed.</p>
<p>OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.</p>
<p>We are excited to share OpenBioLLM-70B with researchers and developers around the world.</p>
<h3>Use with transformers</h3>
<p>Important: Please use the exact chat template provided by Llama-3 instruct version. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.</p>
<p>See the snippet below for usage with Transformers:</p>
<h2>Training procedure</h2>
<h3>Training hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>learning_rate: 0.0002</li>
<li>lr_scheduler: cosine</li>
<li>train_batch_size: 12</li>
<li>eval_batch_size: 8</li>
<li>GPU: H100 80GB SXM5</li>
<li>num_devices: 8</li>
<li>optimizer: adamw_bnb_8bit</li>
<li>lr_scheduler_warmup_steps: 100</li>
<li>num_epochs: 4</li>
</ul>
</details>
<h3>Peft hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>adapter: qlora</li>
<li>lora_r: 128</li>
<li>lora_alpha: 256</li>
<li>lora_dropout: 0.05</li>
<li>lora_target_linear: true</li>
</ul>
<p>-lora_target_modules:</p>
<ul>
<li>q_proj</li>
<li>v_proj</li>
<li>k_proj</li>
<li>o_proj</li>
<li>gate_proj</li>
<li>down_proj</li>
<li>up_proj</li>
</ul>
</details>
<h3>Training results</h3>
<h3>Framework versions</h3>
<ul>
<li>Transformers 4.39.3</li>
<li>Pytorch 2.1.2+cu121</li>
<li>Datasets 2.18.0</li>
<li>Tokenizers 0.15.1</li>
<li>Axolotl</li>
<li>Lm harness for evaluation</li>
</ul>
<h1>Benchmark Results</h1>
<p>OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.</p>
<p>The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.</p>
<p>De-Identification: OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.</p>
<p>!image/png</p>
<p>Advisory Notice!</p>
<p>While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.</p>
<p>Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.</p>
<p>Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:</p>
<ul>
<li>Thorough testing and evaluation in relevant clinical scenarios</li>
<li>Alignment with evidence-based guidelines and best practices</li>
<li>Mitigation of potential biases and failure modes</li>
<li>Integration with human oversight and interpretation</li>
<li>Compliance with regulatory and ethical standards</li>
</ul>
<p>Always consult a qualified healthcare provider for personal medical needs.</p>
<p>If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:</p>
<p>The accompanying paper is currently in progress and will be released soon.</p>
Contact
--------
We look forward to hearing from you and collaborating on this exciting project!
Contributors:
* Ankit Pal (Aaditya Ura) [aadityaura at gmail dot com]
* Saama AI Labs
* Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
References
==========
We thank the Meta Team for their amazing models!
Result sources
* [1] GPT-4 Capabilities of GPT-4 on Medical Challenge Problems
* [2] Med-PaLM-1 Large Language Models Encode Clinical Knowledge
* [3] Med-PaLM-2 Towards Expert-Level Medical Question Answering with Large Language Models
* [4] Gemini-1.0 Gemini Goes to Med School
| [] | [
"TAGS\n#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #conversational #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_notata-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3812
- F1 Score: 0.8355
- Accuracy: 0.8355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
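
For reference, these settings map directly onto the Hugging Face `TrainingArguments` API; the sketch below is illustrative only (model, tokenizer, dataset, and PEFT wiring are assumptions and are not part of this card):

```python
# Minimal sketch of the listed hyperparameters expressed as TrainingArguments.
# Everything outside these arguments (model, data, PEFT config) is assumed.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_prom_prom_core_notata-seqsight_8192_512_30M-L8_f",
    learning_rate=5e-4,                 # learning_rate: 0.0005
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    max_steps=10_000,                   # training_steps: 10000
    lr_scheduler_type="linear",
    adam_beta1=0.9,                     # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
# training_args would then be passed to a Trainer together with the PEFT-wrapped model.
```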
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.513 | 0.6 | 200 | 0.3994 | 0.8211 | 0.8214 |
| 0.4186 | 1.2 | 400 | 0.3783 | 0.8323 | 0.8323 |
| 0.3998 | 1.81 | 600 | 0.3873 | 0.8288 | 0.8291 |
| 0.399 | 2.41 | 800 | 0.3745 | 0.8323 | 0.8325 |
| 0.39 | 3.01 | 1000 | 0.3702 | 0.8347 | 0.8347 |
| 0.3851 | 3.61 | 1200 | 0.3677 | 0.8353 | 0.8355 |
| 0.3879 | 4.22 | 1400 | 0.3709 | 0.8355 | 0.8355 |
| 0.3845 | 4.82 | 1600 | 0.3671 | 0.8353 | 0.8355 |
| 0.3747 | 5.42 | 1800 | 0.3700 | 0.8347 | 0.8347 |
| 0.379 | 6.02 | 2000 | 0.3664 | 0.8365 | 0.8366 |
| 0.3774 | 6.63 | 2200 | 0.3633 | 0.8391 | 0.8391 |
| 0.3737 | 7.23 | 2400 | 0.3662 | 0.8362 | 0.8364 |
| 0.3737 | 7.83 | 2600 | 0.3624 | 0.8398 | 0.8398 |
| 0.3687 | 8.43 | 2800 | 0.3687 | 0.8369 | 0.8372 |
| 0.3762 | 9.04 | 3000 | 0.3607 | 0.8415 | 0.8415 |
| 0.3677 | 9.64 | 3200 | 0.3623 | 0.8421 | 0.8421 |
| 0.3706 | 10.24 | 3400 | 0.3620 | 0.8412 | 0.8412 |
| 0.3642 | 10.84 | 3600 | 0.3611 | 0.8412 | 0.8412 |
| 0.3663 | 11.45 | 3800 | 0.3619 | 0.8419 | 0.8419 |
| 0.3639 | 12.05 | 4000 | 0.3606 | 0.8412 | 0.8412 |
| 0.3644 | 12.65 | 4200 | 0.3616 | 0.8420 | 0.8421 |
| 0.3622 | 13.25 | 4400 | 0.3607 | 0.8420 | 0.8421 |
| 0.3645 | 13.86 | 4600 | 0.3633 | 0.8426 | 0.8427 |
| 0.3606 | 14.46 | 4800 | 0.3621 | 0.8418 | 0.8419 |
| 0.36 | 15.06 | 5000 | 0.3583 | 0.8434 | 0.8434 |
| 0.354 | 15.66 | 5200 | 0.3607 | 0.8449 | 0.8449 |
| 0.3562 | 16.27 | 5400 | 0.3656 | 0.8427 | 0.8427 |
| 0.3576 | 16.87 | 5600 | 0.3634 | 0.8396 | 0.8400 |
| 0.3541 | 17.47 | 5800 | 0.3639 | 0.8402 | 0.8404 |
| 0.3597 | 18.07 | 6000 | 0.3610 | 0.8443 | 0.8444 |
| 0.356 | 18.67 | 6200 | 0.3595 | 0.8428 | 0.8428 |
| 0.3557 | 19.28 | 6400 | 0.3599 | 0.8440 | 0.8440 |
| 0.3536 | 19.88 | 6600 | 0.3583 | 0.8440 | 0.8440 |
| 0.349 | 20.48 | 6800 | 0.3602 | 0.8438 | 0.8438 |
| 0.3583 | 21.08 | 7000 | 0.3598 | 0.8416 | 0.8417 |
| 0.3538 | 21.69 | 7200 | 0.3628 | 0.8436 | 0.8436 |
| 0.3453 | 22.29 | 7400 | 0.3594 | 0.8405 | 0.8406 |
| 0.349 | 22.89 | 7600 | 0.3610 | 0.8403 | 0.8404 |
| 0.3513 | 23.49 | 7800 | 0.3585 | 0.8428 | 0.8428 |
| 0.3494 | 24.1 | 8000 | 0.3610 | 0.8407 | 0.8408 |
| 0.3513 | 24.7 | 8200 | 0.3615 | 0.8413 | 0.8415 |
| 0.3428 | 25.3 | 8400 | 0.3644 | 0.8414 | 0.8415 |
| 0.351 | 25.9 | 8600 | 0.3603 | 0.8434 | 0.8434 |
| 0.3489 | 26.51 | 8800 | 0.3614 | 0.8436 | 0.8436 |
| 0.3464 | 27.11 | 9000 | 0.3612 | 0.8417 | 0.8417 |
| 0.3532 | 27.71 | 9200 | 0.3609 | 0.8418 | 0.8419 |
| 0.3403 | 28.31 | 9400 | 0.3613 | 0.8415 | 0.8415 |
| 0.3481 | 28.92 | 9600 | 0.3613 | 0.8445 | 0.8445 |
| 0.3499 | 29.52 | 9800 | 0.3611 | 0.8439 | 0.8440 |
| 0.3406 | 30.12 | 10000 | 0.3613 | 0.8443 | 0.8444 |
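
The F1 Score and Accuracy columns above are standard classification metrics. The exact evaluation code is not included in this card; a typical Trainer `compute_metrics` callback producing these two numbers might look like the following sketch (the macro averaging choice is an assumption):

```python
# Sketch of a compute_metrics callback yielding the F1 and accuracy values
# reported in the table. The actual evaluation script is not published here,
# and the "macro" averaging mode is an assumption.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, predictions, average="macro"),
        "accuracy": accuracy_score(labels, predictions),
    }
```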
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T02:09:28+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_core\_notata-seqsight\_8192\_512\_30M-L8\_f
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3812
* F1 Score: 0.8355
* Accuracy: 0.8355
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft | ## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
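
Expressed with the `transformers` API, this corresponds roughly to the `BitsAndBytesConfig` below (a sketch; the base model name is inferred from the repository name and is an assumption, not stated in this card):

```python
# Sketch of the bitsandbytes quantization config listed above.
# The base checkpoint is inferred from the repo name and is an assumption.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf",   # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
```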
### Framework versions
- PEFT 0.4.0
| {"library_name": "peft"} | vicaloy/llama-2-13-b-chat-hf-checkpoints | null | [
"peft",
"region:us"
] | null | 2024-04-27T02:09:30+00:00 | [] | [] | TAGS
#peft #region-us
| ## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
| [
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16",
"### Framework versions\n\n\n- PEFT 0.4.0"
] | [
"TAGS\n#peft #region-us \n",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16",
"### Framework versions\n\n\n- PEFT 0.4.0"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_notata-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3910
- F1 Score: 0.8342
- Accuracy: 0.8342
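
At inference time, an adapter like this one is applied on top of the base checkpoint. A minimal loading sketch is shown below; the sequence-classification head, `num_labels=2`, and `trust_remote_code=True` are assumptions about the base model rather than statements from this card:

```python
# Sketch: attaching this PEFT adapter to its base checkpoint for inference.
# Task head, label count, and trust_remote_code are assumptions.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_8192_512_30M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```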
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4915 | 0.6 | 200 | 0.3862 | 0.8258 | 0.8259 |
| 0.4066 | 1.2 | 400 | 0.3737 | 0.8340 | 0.8340 |
| 0.3919 | 1.81 | 600 | 0.3887 | 0.8283 | 0.8287 |
| 0.3899 | 2.41 | 800 | 0.3694 | 0.8350 | 0.8351 |
| 0.3828 | 3.01 | 1000 | 0.3681 | 0.8373 | 0.8376 |
| 0.3764 | 3.61 | 1200 | 0.3621 | 0.8380 | 0.8381 |
| 0.3795 | 4.22 | 1400 | 0.3674 | 0.8364 | 0.8364 |
| 0.3758 | 4.82 | 1600 | 0.3637 | 0.8381 | 0.8383 |
| 0.3635 | 5.42 | 1800 | 0.3703 | 0.8365 | 0.8366 |
| 0.3702 | 6.02 | 2000 | 0.3708 | 0.8381 | 0.8385 |
| 0.3646 | 6.63 | 2200 | 0.3592 | 0.8432 | 0.8432 |
| 0.3602 | 7.23 | 2400 | 0.3715 | 0.8348 | 0.8353 |
| 0.3591 | 7.83 | 2600 | 0.3618 | 0.8429 | 0.8428 |
| 0.3521 | 8.43 | 2800 | 0.3645 | 0.8420 | 0.8423 |
| 0.36 | 9.04 | 3000 | 0.3615 | 0.8442 | 0.8444 |
| 0.3498 | 9.64 | 3200 | 0.3618 | 0.8451 | 0.8451 |
| 0.3517 | 10.24 | 3400 | 0.3621 | 0.8456 | 0.8457 |
| 0.345 | 10.84 | 3600 | 0.3601 | 0.8451 | 0.8451 |
| 0.3444 | 11.45 | 3800 | 0.3621 | 0.8443 | 0.8444 |
| 0.3446 | 12.05 | 4000 | 0.3584 | 0.8463 | 0.8462 |
| 0.3399 | 12.65 | 4200 | 0.3628 | 0.8474 | 0.8474 |
| 0.3389 | 13.25 | 4400 | 0.3635 | 0.8465 | 0.8466 |
| 0.3401 | 13.86 | 4600 | 0.3670 | 0.8447 | 0.8447 |
| 0.3351 | 14.46 | 4800 | 0.3680 | 0.8433 | 0.8436 |
| 0.3337 | 15.06 | 5000 | 0.3601 | 0.8449 | 0.8449 |
| 0.3251 | 15.66 | 5200 | 0.3699 | 0.8470 | 0.8470 |
| 0.3292 | 16.27 | 5400 | 0.3756 | 0.8483 | 0.8483 |
| 0.3268 | 16.87 | 5600 | 0.3730 | 0.8413 | 0.8417 |
| 0.3225 | 17.47 | 5800 | 0.3712 | 0.8429 | 0.8432 |
| 0.3296 | 18.07 | 6000 | 0.3713 | 0.8455 | 0.8457 |
| 0.3239 | 18.67 | 6200 | 0.3693 | 0.8451 | 0.8451 |
| 0.3218 | 19.28 | 6400 | 0.3720 | 0.8469 | 0.8470 |
| 0.3204 | 19.88 | 6600 | 0.3639 | 0.8440 | 0.8440 |
| 0.3136 | 20.48 | 6800 | 0.3698 | 0.8488 | 0.8489 |
| 0.3241 | 21.08 | 7000 | 0.3692 | 0.8457 | 0.8459 |
| 0.3188 | 21.69 | 7200 | 0.3751 | 0.8468 | 0.8468 |
| 0.3107 | 22.29 | 7400 | 0.3699 | 0.8426 | 0.8428 |
| 0.3123 | 22.89 | 7600 | 0.3725 | 0.8425 | 0.8427 |
| 0.3143 | 23.49 | 7800 | 0.3703 | 0.8456 | 0.8457 |
| 0.3126 | 24.1 | 8000 | 0.3763 | 0.8428 | 0.8430 |
| 0.3145 | 24.7 | 8200 | 0.3763 | 0.8408 | 0.8412 |
| 0.3051 | 25.3 | 8400 | 0.3814 | 0.8430 | 0.8432 |
| 0.3089 | 25.9 | 8600 | 0.3747 | 0.8429 | 0.8430 |
| 0.3074 | 26.51 | 8800 | 0.3758 | 0.8449 | 0.8449 |
| 0.3069 | 27.11 | 9000 | 0.3746 | 0.8437 | 0.8438 |
| 0.3115 | 27.71 | 9200 | 0.3769 | 0.8419 | 0.8421 |
| 0.3003 | 28.31 | 9400 | 0.3779 | 0.8413 | 0.8415 |
| 0.3042 | 28.92 | 9600 | 0.3785 | 0.8408 | 0.8410 |
| 0.3067 | 29.52 | 9800 | 0.3772 | 0.8412 | 0.8413 |
| 0.298 | 30.12 | 10000 | 0.3778 | 0.8411 | 0.8412 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T02:10:35+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_core\_notata-seqsight\_8192\_512\_30M-L32\_f
=============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3910
* F1 Score: 0.8342
* Accuracy: 0.8342
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1924
- Accuracy: 0.9529
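
For a quick sanity check, the checkpoint can be exercised through the `pipeline` API (a sketch; label semantics depend on the unspecified training dataset):

```python
# Sketch: running the fine-tuned classifier via the pipeline API.
# The repository id comes from this card; label meanings depend on the
# (unknown) training dataset.
from transformers import pipeline

classifier = pipeline("text-classification", model="brunhild217/my_awesome_model")
print(classifier("This movie was surprisingly good."))
```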
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.198 | 1.0 | 1563 | 0.1405 | 0.9501 |
| 0.1304 | 2.0 | 3126 | 0.1924 | 0.9529 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "roberta-base", "model-index": [{"name": "my_awesome_model", "results": []}]} | brunhild217/my_awesome_model | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T02:12:24+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
| my\_awesome\_model
==================
This model is a fine-tuned version of roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1924
* Accuracy: 0.9529
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_tata-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4139
- F1 Score: 0.8204
- Accuracy: 0.8206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.598 | 5.13 | 200 | 0.5809 | 0.6855 | 0.6884 |
| 0.5478 | 10.26 | 400 | 0.5718 | 0.6949 | 0.6982 |
| 0.5292 | 15.38 | 600 | 0.5582 | 0.7134 | 0.7178 |
| 0.5098 | 20.51 | 800 | 0.5347 | 0.7385 | 0.7406 |
| 0.4956 | 25.64 | 1000 | 0.5128 | 0.7385 | 0.7390 |
| 0.4841 | 30.77 | 1200 | 0.4952 | 0.7504 | 0.7504 |
| 0.4687 | 35.9 | 1400 | 0.4978 | 0.7327 | 0.7341 |
| 0.4547 | 41.03 | 1600 | 0.4727 | 0.7731 | 0.7732 |
| 0.4453 | 46.15 | 1800 | 0.4649 | 0.7664 | 0.7667 |
| 0.427 | 51.28 | 2000 | 0.4587 | 0.7757 | 0.7765 |
| 0.4099 | 56.41 | 2200 | 0.4288 | 0.8010 | 0.8010 |
| 0.3969 | 61.54 | 2400 | 0.4209 | 0.8091 | 0.8091 |
| 0.3875 | 66.67 | 2600 | 0.4110 | 0.8091 | 0.8091 |
| 0.376 | 71.79 | 2800 | 0.4023 | 0.8237 | 0.8238 |
| 0.364 | 76.92 | 3000 | 0.4107 | 0.8087 | 0.8091 |
| 0.3553 | 82.05 | 3200 | 0.3894 | 0.8270 | 0.8271 |
| 0.3515 | 87.18 | 3400 | 0.4001 | 0.8250 | 0.8254 |
| 0.3446 | 92.31 | 3600 | 0.3743 | 0.8385 | 0.8385 |
| 0.3406 | 97.44 | 3800 | 0.3740 | 0.8385 | 0.8385 |
| 0.3333 | 102.56 | 4000 | 0.3718 | 0.8434 | 0.8434 |
| 0.3285 | 107.69 | 4200 | 0.3647 | 0.8450 | 0.8450 |
| 0.3212 | 112.82 | 4400 | 0.3744 | 0.8450 | 0.8450 |
| 0.3183 | 117.95 | 4600 | 0.3704 | 0.8401 | 0.8401 |
| 0.319 | 123.08 | 4800 | 0.3620 | 0.8418 | 0.8418 |
| 0.3103 | 128.21 | 5000 | 0.3592 | 0.8433 | 0.8434 |
| 0.3078 | 133.33 | 5200 | 0.3610 | 0.8449 | 0.8450 |
| 0.3047 | 138.46 | 5400 | 0.3588 | 0.8401 | 0.8401 |
| 0.2994 | 143.59 | 5600 | 0.3714 | 0.8417 | 0.8418 |
| 0.3022 | 148.72 | 5800 | 0.3612 | 0.8401 | 0.8401 |
| 0.2987 | 153.85 | 6000 | 0.3610 | 0.8417 | 0.8418 |
| 0.2974 | 158.97 | 6200 | 0.3615 | 0.8450 | 0.8450 |
| 0.2886 | 164.1 | 6400 | 0.3628 | 0.8401 | 0.8401 |
| 0.2889 | 169.23 | 6600 | 0.3660 | 0.8466 | 0.8467 |
| 0.2857 | 174.36 | 6800 | 0.3690 | 0.8385 | 0.8385 |
| 0.2831 | 179.49 | 7000 | 0.3646 | 0.8368 | 0.8369 |
| 0.2834 | 184.62 | 7200 | 0.3754 | 0.8351 | 0.8352 |
| 0.284 | 189.74 | 7400 | 0.3658 | 0.8368 | 0.8369 |
| 0.2808 | 194.87 | 7600 | 0.3645 | 0.8417 | 0.8418 |
| 0.2754 | 200.0 | 7800 | 0.3691 | 0.8385 | 0.8385 |
| 0.2791 | 205.13 | 8000 | 0.3706 | 0.8352 | 0.8352 |
| 0.2722 | 210.26 | 8200 | 0.3687 | 0.8385 | 0.8385 |
| 0.2798 | 215.38 | 8400 | 0.3682 | 0.8369 | 0.8369 |
| 0.2711 | 220.51 | 8600 | 0.3713 | 0.8352 | 0.8352 |
| 0.2763 | 225.64 | 8800 | 0.3715 | 0.8320 | 0.8320 |
| 0.2728 | 230.77 | 9000 | 0.3705 | 0.8352 | 0.8352 |
| 0.2712 | 235.9 | 9200 | 0.3733 | 0.8352 | 0.8352 |
| 0.2758 | 241.03 | 9400 | 0.3685 | 0.8369 | 0.8369 |
| 0.2707 | 246.15 | 9600 | 0.3707 | 0.8385 | 0.8385 |
| 0.2787 | 251.28 | 9800 | 0.3701 | 0.8369 | 0.8369 |
| 0.2751 | 256.41 | 10000 | 0.3692 | 0.8385 | 0.8385 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T02:12:51+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_core\_tata-seqsight\_8192\_512\_30M-L1\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4139
* F1 Score: 0.8204
* Accuracy: 0.8206
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_tata-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5952
- F1 Score: 0.8385
- Accuracy: 0.8385
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5745 | 5.13 | 200 | 0.5589 | 0.7076 | 0.7096 |
| 0.5057 | 10.26 | 400 | 0.5507 | 0.7268 | 0.7325 |
| 0.4671 | 15.38 | 600 | 0.5074 | 0.7501 | 0.7537 |
| 0.4179 | 20.51 | 800 | 0.4490 | 0.7924 | 0.7928 |
| 0.3745 | 25.64 | 1000 | 0.4288 | 0.8152 | 0.8157 |
| 0.3408 | 30.77 | 1200 | 0.4045 | 0.8205 | 0.8206 |
| 0.3141 | 35.9 | 1400 | 0.3903 | 0.8189 | 0.8189 |
| 0.2929 | 41.03 | 1600 | 0.3908 | 0.8271 | 0.8271 |
| 0.2779 | 46.15 | 1800 | 0.3883 | 0.8287 | 0.8287 |
| 0.257 | 51.28 | 2000 | 0.3939 | 0.8368 | 0.8369 |
| 0.2411 | 56.41 | 2200 | 0.3823 | 0.8368 | 0.8369 |
| 0.2262 | 61.54 | 2400 | 0.4157 | 0.8351 | 0.8352 |
| 0.2149 | 66.67 | 2600 | 0.3981 | 0.8432 | 0.8434 |
| 0.2062 | 71.79 | 2800 | 0.4350 | 0.8398 | 0.8401 |
| 0.1949 | 76.92 | 3000 | 0.4541 | 0.8303 | 0.8303 |
| 0.1832 | 82.05 | 3200 | 0.4548 | 0.8303 | 0.8303 |
| 0.1807 | 87.18 | 3400 | 0.4484 | 0.8287 | 0.8287 |
| 0.1739 | 92.31 | 3600 | 0.4368 | 0.8336 | 0.8336 |
| 0.1616 | 97.44 | 3800 | 0.4746 | 0.8385 | 0.8385 |
| 0.153 | 102.56 | 4000 | 0.4779 | 0.8369 | 0.8369 |
| 0.1485 | 107.69 | 4200 | 0.4921 | 0.8385 | 0.8385 |
| 0.1405 | 112.82 | 4400 | 0.5096 | 0.8450 | 0.8450 |
| 0.1405 | 117.95 | 4600 | 0.4949 | 0.8482 | 0.8483 |
| 0.1328 | 123.08 | 4800 | 0.5242 | 0.8434 | 0.8434 |
| 0.1292 | 128.21 | 5000 | 0.5093 | 0.8254 | 0.8254 |
| 0.1229 | 133.33 | 5200 | 0.5286 | 0.8563 | 0.8564 |
| 0.1205 | 138.46 | 5400 | 0.5364 | 0.8449 | 0.8450 |
| 0.1141 | 143.59 | 5600 | 0.5838 | 0.8303 | 0.8303 |
| 0.1142 | 148.72 | 5800 | 0.5555 | 0.8336 | 0.8336 |
| 0.1102 | 153.85 | 6000 | 0.5760 | 0.8416 | 0.8418 |
| 0.1064 | 158.97 | 6200 | 0.5708 | 0.8320 | 0.8320 |
| 0.1065 | 164.1 | 6400 | 0.5825 | 0.8418 | 0.8418 |
| 0.1018 | 169.23 | 6600 | 0.5989 | 0.8336 | 0.8336 |
| 0.0973 | 174.36 | 6800 | 0.6020 | 0.8336 | 0.8336 |
| 0.0965 | 179.49 | 7000 | 0.5881 | 0.8401 | 0.8401 |
| 0.0931 | 184.62 | 7200 | 0.6173 | 0.8302 | 0.8303 |
| 0.0908 | 189.74 | 7400 | 0.5950 | 0.8401 | 0.8401 |
| 0.0929 | 194.87 | 7600 | 0.6094 | 0.8465 | 0.8467 |
| 0.0885 | 200.0 | 7800 | 0.5925 | 0.8418 | 0.8418 |
| 0.0883 | 205.13 | 8000 | 0.6153 | 0.8418 | 0.8418 |
| 0.0819 | 210.26 | 8200 | 0.6215 | 0.8499 | 0.8499 |
| 0.082 | 215.38 | 8400 | 0.6393 | 0.8499 | 0.8499 |
| 0.0794 | 220.51 | 8600 | 0.6357 | 0.8368 | 0.8369 |
| 0.0816 | 225.64 | 8800 | 0.6123 | 0.8483 | 0.8483 |
| 0.0864 | 230.77 | 9000 | 0.6061 | 0.8417 | 0.8418 |
| 0.083 | 235.9 | 9200 | 0.6174 | 0.8418 | 0.8418 |
| 0.0822 | 241.03 | 9400 | 0.6227 | 0.8466 | 0.8467 |
| 0.0786 | 246.15 | 9600 | 0.6303 | 0.8434 | 0.8434 |
| 0.084 | 251.28 | 9800 | 0.6274 | 0.8450 | 0.8450 |
| 0.0781 | 256.41 | 10000 | 0.6283 | 0.8450 | 0.8450 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T02:13:31+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_core\_tata-seqsight\_8192\_512\_30M-L8\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5952
* F1 Score: 0.8385
* Accuracy: 0.8385
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
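
Until the card is completed, a generic starting point is the standard `transformers` causal-LM API. The sketch below assumes the repository id shown in this listing; device placement, precision handling, and prompt format are assumptions:

```python
# Sketch: generic causal-LM loading and generation for this repository.
# Device placement, dtype handling, and prompt format are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "thesven/thesven-Tihkal-Pihkal-ORPO-Llama-3-8B-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Question: What does ORPO fine-tuning change?\nAnswer:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```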
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "datasets": ["thesven/pihkal-tihkal-orpo-dpo-dataset"]} | thesven/thesven-Tihkal-Pihkal-ORPO-Llama-3-8B-bnb-4bit | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:thesven/pihkal-tihkal-orpo-dpo-dataset",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T02:13:38+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #dataset-thesven/pihkal-tihkal-orpo-dpo-dataset #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #dataset-thesven/pihkal-tihkal-orpo-dpo-dataset #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmd-4bars-2048-epochs10
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0
- Datasets 2.15.0
- Tokenizers 0.15.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "lmd-4bars-2048-epochs10", "results": []}]} | MikeMpapa/lmd-4bars-2048-epochs10 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T02:16:15+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# lmd-4bars-2048-epochs10
This model is a fine-tuned version of gpt2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0
- Datasets 2.15.0
- Tokenizers 0.15.1
| [
"# lmd-4bars-2048-epochs10\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.005\n- train_batch_size: 32\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.0\n- Pytorch 2.1.0\n- Datasets 2.15.0\n- Tokenizers 0.15.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# lmd-4bars-2048-epochs10\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.005\n- train_batch_size: 32\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.0\n- Pytorch 2.1.0\n- Datasets 2.15.0\n- Tokenizers 0.15.1"
] |
text-to-image | null |
gguf quantized version of https://huggingface.co/stabilityai/sdxl-turbo | {"license": "other", "license_name": "sai-nc-community", "license_link": "https://huggingface.co/stabilityai/sdxl-turbo/blob/main/LICENSE.TXT", "pipeline_tag": "text-to-image"} | OlegSkutte/sdxl-turbo-GGUF | null | [
"gguf",
"text-to-image",
"license:other",
"region:us"
] | null | 2024-04-27T02:18:36+00:00 | [] | [] | TAGS
#gguf #text-to-image #license-other #region-us
|
gguf quantized version of URL | [] | [
"TAGS\n#gguf #text-to-image #license-other #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_tata-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7455
- F1 Score: 0.8269
- Accuracy: 0.8271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5591 | 5.13 | 200 | 0.5316 | 0.7337 | 0.7341 |
| 0.4678 | 10.26 | 400 | 0.4843 | 0.7821 | 0.7830 |
| 0.3819 | 15.38 | 600 | 0.4105 | 0.8188 | 0.8189 |
| 0.3145 | 20.51 | 800 | 0.4078 | 0.8173 | 0.8173 |
| 0.2727 | 25.64 | 1000 | 0.4044 | 0.8352 | 0.8352 |
| 0.2292 | 30.77 | 1200 | 0.4325 | 0.8384 | 0.8385 |
| 0.1985 | 35.9 | 1400 | 0.4151 | 0.8401 | 0.8401 |
| 0.1666 | 41.03 | 1600 | 0.4526 | 0.8399 | 0.8401 |
| 0.1497 | 46.15 | 1800 | 0.5059 | 0.8400 | 0.8401 |
| 0.1277 | 51.28 | 2000 | 0.5027 | 0.8483 | 0.8483 |
| 0.1108 | 56.41 | 2200 | 0.5648 | 0.8596 | 0.8597 |
| 0.0988 | 61.54 | 2400 | 0.5330 | 0.8483 | 0.8483 |
| 0.091 | 66.67 | 2600 | 0.5492 | 0.8449 | 0.8450 |
| 0.0815 | 71.79 | 2800 | 0.5692 | 0.8645 | 0.8646 |
| 0.0714 | 76.92 | 3000 | 0.5980 | 0.8532 | 0.8532 |
| 0.065 | 82.05 | 3200 | 0.6359 | 0.8464 | 0.8467 |
| 0.0599 | 87.18 | 3400 | 0.6645 | 0.8497 | 0.8499 |
| 0.0563 | 92.31 | 3600 | 0.6828 | 0.8515 | 0.8515 |
| 0.055 | 97.44 | 3800 | 0.6676 | 0.8450 | 0.8450 |
| 0.0465 | 102.56 | 4000 | 0.7496 | 0.8412 | 0.8418 |
| 0.047 | 107.69 | 4200 | 0.7358 | 0.8432 | 0.8434 |
| 0.0448 | 112.82 | 4400 | 0.7372 | 0.8329 | 0.8336 |
| 0.0436 | 117.95 | 4600 | 0.6990 | 0.8400 | 0.8401 |
| 0.039 | 123.08 | 4800 | 0.7425 | 0.8433 | 0.8434 |
| 0.0395 | 128.21 | 5000 | 0.7357 | 0.8315 | 0.8320 |
| 0.0369 | 133.33 | 5200 | 0.7054 | 0.8483 | 0.8483 |
| 0.0352 | 138.46 | 5400 | 0.7598 | 0.8382 | 0.8385 |
| 0.0318 | 143.59 | 5600 | 0.7746 | 0.8465 | 0.8467 |
| 0.0315 | 148.72 | 5800 | 0.7480 | 0.8400 | 0.8401 |
| 0.0309 | 153.85 | 6000 | 0.7548 | 0.8563 | 0.8564 |
| 0.0312 | 158.97 | 6200 | 0.7342 | 0.8548 | 0.8548 |
| 0.0269 | 164.1 | 6400 | 0.7992 | 0.8433 | 0.8434 |
| 0.03 | 169.23 | 6600 | 0.7064 | 0.8417 | 0.8418 |
| 0.0268 | 174.36 | 6800 | 0.7526 | 0.8498 | 0.8499 |
| 0.026 | 179.49 | 7000 | 0.7874 | 0.8499 | 0.8499 |
| 0.0263 | 184.62 | 7200 | 0.8341 | 0.8378 | 0.8385 |
| 0.0224 | 189.74 | 7400 | 0.8513 | 0.8512 | 0.8515 |
| 0.0237 | 194.87 | 7600 | 0.7960 | 0.8579 | 0.8581 |
| 0.0226 | 200.0 | 7800 | 0.8098 | 0.8515 | 0.8515 |
| 0.021 | 205.13 | 8000 | 0.8648 | 0.8480 | 0.8483 |
| 0.0212 | 210.26 | 8200 | 0.8669 | 0.8381 | 0.8385 |
| 0.0193 | 215.38 | 8400 | 0.8712 | 0.8380 | 0.8385 |
| 0.0186 | 220.51 | 8600 | 0.8453 | 0.8448 | 0.8450 |
| 0.0194 | 225.64 | 8800 | 0.8293 | 0.8432 | 0.8434 |
| 0.0196 | 230.77 | 9000 | 0.8284 | 0.8415 | 0.8418 |
| 0.0197 | 235.9 | 9200 | 0.8125 | 0.8597 | 0.8597 |
| 0.0186 | 241.03 | 9400 | 0.8294 | 0.8547 | 0.8548 |
| 0.0199 | 246.15 | 9600 | 0.8236 | 0.8514 | 0.8515 |
| 0.0191 | 251.28 | 9800 | 0.8482 | 0.8398 | 0.8401 |
| 0.0164 | 256.41 | 10000 | 0.8305 | 0.8514 | 0.8515 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T02:18:53+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_core\_tata-seqsight\_8192\_512\_30M-L32\_f
===========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7455
* F1 Score: 0.8269
* Accuracy: 0.8271
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_all-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2147
- F1 Score: 0.9138
- Accuracy: 0.9139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4209 | 0.54 | 200 | 0.2826 | 0.8948 | 0.8949 |
| 0.2891 | 1.08 | 400 | 0.2521 | 0.9057 | 0.9057 |
| 0.2726 | 1.62 | 600 | 0.2326 | 0.9108 | 0.9108 |
| 0.2489 | 2.16 | 800 | 0.2317 | 0.9107 | 0.9108 |
| 0.2486 | 2.7 | 1000 | 0.2195 | 0.9150 | 0.9150 |
| 0.2368 | 3.24 | 1200 | 0.2289 | 0.9108 | 0.9110 |
| 0.2414 | 3.78 | 1400 | 0.2178 | 0.9141 | 0.9142 |
| 0.2329 | 4.32 | 1600 | 0.2107 | 0.9184 | 0.9184 |
| 0.2256 | 4.86 | 1800 | 0.2147 | 0.9177 | 0.9177 |
| 0.2248 | 5.41 | 2000 | 0.2083 | 0.9184 | 0.9184 |
| 0.2247 | 5.95 | 2200 | 0.2053 | 0.9202 | 0.9203 |
| 0.22 | 6.49 | 2400 | 0.2026 | 0.9196 | 0.9196 |
| 0.2232 | 7.03 | 2600 | 0.2044 | 0.9186 | 0.9186 |
| 0.2157 | 7.57 | 2800 | 0.1997 | 0.9208 | 0.9208 |
| 0.2165 | 8.11 | 3000 | 0.2005 | 0.9204 | 0.9204 |
| 0.2162 | 8.65 | 3200 | 0.1976 | 0.9220 | 0.9220 |
| 0.2082 | 9.19 | 3400 | 0.1986 | 0.9208 | 0.9208 |
| 0.2079 | 9.73 | 3600 | 0.1956 | 0.9201 | 0.9201 |
| 0.218 | 10.27 | 3800 | 0.1952 | 0.9225 | 0.9225 |
| 0.2089 | 10.81 | 4000 | 0.1957 | 0.9218 | 0.9218 |
| 0.2111 | 11.35 | 4200 | 0.1959 | 0.9228 | 0.9228 |
| 0.2048 | 11.89 | 4400 | 0.1960 | 0.9221 | 0.9221 |
| 0.2042 | 12.43 | 4600 | 0.1960 | 0.9219 | 0.9220 |
| 0.2072 | 12.97 | 4800 | 0.1928 | 0.9233 | 0.9233 |
| 0.2049 | 13.51 | 5000 | 0.1928 | 0.9255 | 0.9255 |
| 0.2069 | 14.05 | 5200 | 0.1915 | 0.9243 | 0.9243 |
| 0.2033 | 14.59 | 5400 | 0.1937 | 0.9231 | 0.9231 |
| 0.2008 | 15.14 | 5600 | 0.1908 | 0.9257 | 0.9257 |
| 0.1998 | 15.68 | 5800 | 0.1903 | 0.9236 | 0.9236 |
| 0.2012 | 16.22 | 6000 | 0.1936 | 0.9241 | 0.9242 |
| 0.1995 | 16.76 | 6200 | 0.1915 | 0.9240 | 0.9240 |
| 0.1995 | 17.3 | 6400 | 0.1917 | 0.9231 | 0.9231 |
| 0.2031 | 17.84 | 6600 | 0.1927 | 0.9223 | 0.9223 |
| 0.1963 | 18.38 | 6800 | 0.1902 | 0.9231 | 0.9231 |
| 0.2017 | 18.92 | 7000 | 0.1890 | 0.9243 | 0.9243 |
| 0.1999 | 19.46 | 7200 | 0.1919 | 0.9220 | 0.9220 |
| 0.1967 | 20.0 | 7400 | 0.1894 | 0.9257 | 0.9257 |
| 0.1982 | 20.54 | 7600 | 0.1904 | 0.9231 | 0.9231 |
| 0.196 | 21.08 | 7800 | 0.1885 | 0.9247 | 0.9247 |
| 0.1983 | 21.62 | 8000 | 0.1895 | 0.9240 | 0.9240 |
| 0.1957 | 22.16 | 8200 | 0.1890 | 0.9223 | 0.9223 |
| 0.1901 | 22.7 | 8400 | 0.1899 | 0.9233 | 0.9233 |
| 0.1985 | 23.24 | 8600 | 0.1881 | 0.9242 | 0.9242 |
| 0.196 | 23.78 | 8800 | 0.1883 | 0.9250 | 0.925 |
| 0.1977 | 24.32 | 9000 | 0.1893 | 0.9223 | 0.9223 |
| 0.1935 | 24.86 | 9200 | 0.1891 | 0.9240 | 0.9240 |
| 0.1956 | 25.41 | 9400 | 0.1891 | 0.9228 | 0.9228 |
| 0.1958 | 25.95 | 9600 | 0.1889 | 0.9231 | 0.9231 |
| 0.1953 | 26.49 | 9800 | 0.1884 | 0.9235 | 0.9235 |
| 0.1921 | 27.03 | 10000 | 0.1885 | 0.9236 | 0.9236 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T02:19:30+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_300\_all-seqsight\_8192\_512\_30M-L1\_f
========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2147
* F1 Score: 0.9138
* Accuracy: 0.9139
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_all-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1981
- F1 Score: 0.9179
- Accuracy: 0.9179
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.3652 | 0.54 | 200 | 0.2431 | 0.9059 | 0.9061 |
| 0.2518 | 1.08 | 400 | 0.2259 | 0.9150 | 0.9150 |
| 0.2395 | 1.62 | 600 | 0.2077 | 0.9147 | 0.9147 |
| 0.2208 | 2.16 | 800 | 0.2165 | 0.9150 | 0.9152 |
| 0.2197 | 2.7 | 1000 | 0.2012 | 0.9199 | 0.9199 |
| 0.213 | 3.24 | 1200 | 0.2059 | 0.9170 | 0.9171 |
| 0.2124 | 3.78 | 1400 | 0.1955 | 0.9209 | 0.9209 |
| 0.2061 | 4.32 | 1600 | 0.1899 | 0.9245 | 0.9245 |
| 0.2024 | 4.86 | 1800 | 0.1930 | 0.9215 | 0.9215 |
| 0.1987 | 5.41 | 2000 | 0.1909 | 0.9250 | 0.925 |
| 0.199 | 5.95 | 2200 | 0.1846 | 0.9267 | 0.9267 |
| 0.195 | 6.49 | 2400 | 0.1826 | 0.9262 | 0.9262 |
| 0.1985 | 7.03 | 2600 | 0.1850 | 0.9263 | 0.9264 |
| 0.1897 | 7.57 | 2800 | 0.1824 | 0.9262 | 0.9262 |
| 0.1911 | 8.11 | 3000 | 0.1824 | 0.9258 | 0.9258 |
| 0.188 | 8.65 | 3200 | 0.1828 | 0.9270 | 0.9270 |
| 0.1835 | 9.19 | 3400 | 0.1826 | 0.9265 | 0.9265 |
| 0.1808 | 9.73 | 3600 | 0.1804 | 0.9243 | 0.9243 |
| 0.1887 | 10.27 | 3800 | 0.1804 | 0.9277 | 0.9277 |
| 0.1817 | 10.81 | 4000 | 0.1787 | 0.9274 | 0.9274 |
| 0.1826 | 11.35 | 4200 | 0.1786 | 0.9270 | 0.9270 |
| 0.1772 | 11.89 | 4400 | 0.1835 | 0.9275 | 0.9275 |
| 0.1766 | 12.43 | 4600 | 0.1770 | 0.9279 | 0.9279 |
| 0.179 | 12.97 | 4800 | 0.1770 | 0.9260 | 0.9260 |
| 0.1773 | 13.51 | 5000 | 0.1775 | 0.9280 | 0.9280 |
| 0.1775 | 14.05 | 5200 | 0.1781 | 0.9291 | 0.9291 |
| 0.1723 | 14.59 | 5400 | 0.1771 | 0.9282 | 0.9282 |
| 0.1708 | 15.14 | 5600 | 0.1747 | 0.9270 | 0.9270 |
| 0.1704 | 15.68 | 5800 | 0.1761 | 0.9267 | 0.9267 |
| 0.1719 | 16.22 | 6000 | 0.1781 | 0.9269 | 0.9269 |
| 0.1687 | 16.76 | 6200 | 0.1781 | 0.9270 | 0.9270 |
| 0.1692 | 17.3 | 6400 | 0.1740 | 0.9277 | 0.9277 |
| 0.1729 | 17.84 | 6600 | 0.1761 | 0.9270 | 0.9270 |
| 0.1656 | 18.38 | 6800 | 0.1762 | 0.9272 | 0.9272 |
| 0.1682 | 18.92 | 7000 | 0.1732 | 0.9275 | 0.9275 |
| 0.1681 | 19.46 | 7200 | 0.1752 | 0.9284 | 0.9284 |
| 0.165 | 20.0 | 7400 | 0.1753 | 0.9275 | 0.9275 |
| 0.1674 | 20.54 | 7600 | 0.1771 | 0.9284 | 0.9284 |
| 0.1623 | 21.08 | 7800 | 0.1734 | 0.9277 | 0.9277 |
| 0.165 | 21.62 | 8000 | 0.1741 | 0.9282 | 0.9282 |
| 0.1638 | 22.16 | 8200 | 0.1746 | 0.9282 | 0.9282 |
| 0.1576 | 22.7 | 8400 | 0.1730 | 0.9269 | 0.9269 |
| 0.1639 | 23.24 | 8600 | 0.1723 | 0.9277 | 0.9277 |
| 0.1611 | 23.78 | 8800 | 0.1723 | 0.9279 | 0.9279 |
| 0.1634 | 24.32 | 9000 | 0.1739 | 0.9275 | 0.9275 |
| 0.1611 | 24.86 | 9200 | 0.1736 | 0.9280 | 0.9280 |
| 0.1605 | 25.41 | 9400 | 0.1737 | 0.9277 | 0.9277 |
| 0.1606 | 25.95 | 9600 | 0.1736 | 0.9279 | 0.9279 |
| 0.1608 | 26.49 | 9800 | 0.1731 | 0.9277 | 0.9277 |
| 0.1561 | 27.03 | 10000 | 0.1733 | 0.9282 | 0.9282 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T02:21:00+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_300\_all-seqsight\_8192\_512\_30M-L8\_f
========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1981
* F1 Score: 0.9179
* Accuracy: 0.9179
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_all-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1998
- F1 Score: 0.9263
- Accuracy: 0.9264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.335 | 0.54 | 200 | 0.2328 | 0.9088 | 0.9090 |
| 0.2383 | 1.08 | 400 | 0.2138 | 0.9173 | 0.9174 |
| 0.2261 | 1.62 | 600 | 0.2054 | 0.9181 | 0.9181 |
| 0.2106 | 2.16 | 800 | 0.2112 | 0.9137 | 0.9139 |
| 0.207 | 2.7 | 1000 | 0.1931 | 0.9231 | 0.9231 |
| 0.2012 | 3.24 | 1200 | 0.1990 | 0.9176 | 0.9177 |
| 0.199 | 3.78 | 1400 | 0.1928 | 0.9217 | 0.9218 |
| 0.1932 | 4.32 | 1600 | 0.1845 | 0.9275 | 0.9275 |
| 0.1893 | 4.86 | 1800 | 0.1827 | 0.9252 | 0.9252 |
| 0.1829 | 5.41 | 2000 | 0.1863 | 0.9285 | 0.9285 |
| 0.1845 | 5.95 | 2200 | 0.1758 | 0.9284 | 0.9284 |
| 0.1774 | 6.49 | 2400 | 0.1849 | 0.9241 | 0.9242 |
| 0.1817 | 7.03 | 2600 | 0.1774 | 0.9297 | 0.9297 |
| 0.1722 | 7.57 | 2800 | 0.1769 | 0.9292 | 0.9292 |
| 0.1714 | 8.11 | 3000 | 0.1770 | 0.9297 | 0.9297 |
| 0.1668 | 8.65 | 3200 | 0.1790 | 0.9302 | 0.9302 |
| 0.164 | 9.19 | 3400 | 0.1735 | 0.9314 | 0.9314 |
| 0.1586 | 9.73 | 3600 | 0.1751 | 0.9267 | 0.9267 |
| 0.1659 | 10.27 | 3800 | 0.1731 | 0.9299 | 0.9299 |
| 0.1589 | 10.81 | 4000 | 0.1719 | 0.9312 | 0.9313 |
| 0.1573 | 11.35 | 4200 | 0.1701 | 0.9340 | 0.9340 |
| 0.154 | 11.89 | 4400 | 0.1762 | 0.9328 | 0.9328 |
| 0.1515 | 12.43 | 4600 | 0.1761 | 0.9299 | 0.9299 |
| 0.1524 | 12.97 | 4800 | 0.1723 | 0.9277 | 0.9277 |
| 0.1486 | 13.51 | 5000 | 0.1774 | 0.9300 | 0.9301 |
| 0.1499 | 14.05 | 5200 | 0.1705 | 0.9323 | 0.9323 |
| 0.1437 | 14.59 | 5400 | 0.1677 | 0.9323 | 0.9323 |
| 0.1413 | 15.14 | 5600 | 0.1669 | 0.9355 | 0.9355 |
| 0.1379 | 15.68 | 5800 | 0.1682 | 0.9326 | 0.9326 |
| 0.1387 | 16.22 | 6000 | 0.1686 | 0.9351 | 0.9351 |
| 0.1355 | 16.76 | 6200 | 0.1750 | 0.9312 | 0.9313 |
| 0.1374 | 17.3 | 6400 | 0.1695 | 0.9338 | 0.9338 |
| 0.1385 | 17.84 | 6600 | 0.1684 | 0.9329 | 0.9329 |
| 0.1298 | 18.38 | 6800 | 0.1703 | 0.9319 | 0.9319 |
| 0.1315 | 18.92 | 7000 | 0.1683 | 0.9328 | 0.9328 |
| 0.1321 | 19.46 | 7200 | 0.1685 | 0.9336 | 0.9336 |
| 0.1281 | 20.0 | 7400 | 0.1742 | 0.9317 | 0.9318 |
| 0.1271 | 20.54 | 7600 | 0.1778 | 0.9312 | 0.9313 |
| 0.1267 | 21.08 | 7800 | 0.1670 | 0.9321 | 0.9321 |
| 0.1246 | 21.62 | 8000 | 0.1707 | 0.9316 | 0.9316 |
| 0.1247 | 22.16 | 8200 | 0.1693 | 0.9319 | 0.9319 |
| 0.1212 | 22.7 | 8400 | 0.1682 | 0.9309 | 0.9309 |
| 0.1214 | 23.24 | 8600 | 0.1704 | 0.9324 | 0.9324 |
| 0.1209 | 23.78 | 8800 | 0.1698 | 0.9326 | 0.9326 |
| 0.1226 | 24.32 | 9000 | 0.1729 | 0.9319 | 0.9319 |
| 0.1208 | 24.86 | 9200 | 0.1707 | 0.9307 | 0.9307 |
| 0.1181 | 25.41 | 9400 | 0.1729 | 0.9319 | 0.9319 |
| 0.1206 | 25.95 | 9600 | 0.1718 | 0.9336 | 0.9336 |
| 0.1184 | 26.49 | 9800 | 0.1713 | 0.9329 | 0.9329 |
| 0.1156 | 27.03 | 10000 | 0.1710 | 0.9323 | 0.9323 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T02:21:36+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_prom\_prom\_300\_all-seqsight\_8192\_512\_30M-L32\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1998
* F1 Score: 0.9263
* Accuracy: 0.9264
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-70B builds upon the powerful foundation of the [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.
⚙️ **Release Details**:
- **Model Size**: 70 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-70B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-70B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise, performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to make this less likely.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch

model_id = "aaditya/OpenBioLLM-Llama3-70B"

# Load the model in bfloat16 and shard it across the available GPUs.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs, and you are willing to help answer the user's query with an explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
    {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg pill?"},
]

# Build the prompt with the Llama-3 instruct chat template.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop on either the regular EOS token or Llama-3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

# Greedy decoding (temperature = 0), as recommended above; re-enable sampling
# (do_sample=True with temperature/top_p) if more varied outputs are wanted.
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,
)
print(outputs[0]["generated_text"][len(prompt):])
```
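
For scale, the bfloat16 weights of a 70B-parameter model occupy roughly 140 GB, which is why the snippet uses `device_map="auto"` to shard the checkpoint across all visible GPUs; on smaller hardware, the quantized versions linked in the release details above are the more practical option.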
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 8
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
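
For orientation, the adapter settings above correspond roughly to the following PEFT `LoraConfig`; this is a sketch only, since the actual run was driven by Axolotl's YAML configuration, and the `bias` and `task_type` values are assumptions.

```python
# Rough PEFT equivalent of the QLoRA adapter settings listed above (sketch only).
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],
    bias="none",            # assumption
    task_type="CAUSAL_LM",  # assumption
)
```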
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
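
The 'Lm harness' entry refers to EleutherAI's lm-evaluation-harness. As an illustration only, a zero-shot medical evaluation could be driven from Python roughly as below; this is not the exact command used for the numbers that follow, and the task names are assumptions that depend on the installed harness version.

```python
# Illustrative sketch, not the authors' evaluation script. Task names are
# assumptions and should be checked against the installed harness version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=aaditya/OpenBioLLM-Llama3-70B,dtype=bfloat16",
    tasks=["pubmedqa", "medqa_4options", "medmcqa", "mmlu_anatomy"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```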
# Benchmark Results
🔥 OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **The results below are from the quantized version of OpenBioLLM-70B.**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization.

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | {"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-70B-Instruct", "widget": [{"example_title": "OpenBioLLM-70B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-70B", "results": []}]} | LoneStriker/OpenBioLLM-Llama3-70B-6.0bpw-h6-exl2 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"conversational",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"6-bit",
"region:us"
] | null | 2024-04-27T02:22:55+00:00 | [
"2305.18290",
"2303.13375",
"2212.13138",
"2305.09617",
"2402.07023"
] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #conversational #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us
|

Advancing Open-source Large Language Models in Medical Domain
=============================================================
Online Demo
|
GitHub
|
[](#) |
Discord
!image/jpeg
Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
Biomedical Specialization: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
Superior Performance: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
Advanced Training Techniques: OpenBioLLM-70B builds upon the powerful foundations of the Meta-Llama-3-70B-Instruct and Meta-Llama-3-70B-Instruct models. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
</li>
<li>Fine-tuning dataset: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)</li>
</ul>
<p>This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.</p>
<p>️ Release Details:</p>
<ul>
<li>Model Size: 70 billion parameters</li>
<li>Quantization: Optimized quantized versions available Here</li>
<li>Language(s) (NLP): en</li>
<li>Developed By: Ankit Pal (Aaditya Ura) from Saama AI Labs</li>
<li>License: Meta-Llama License</li>
<li>Fine-tuned from models: Meta-Llama-3-70B-Instruct</li>
<li>Resources for more information:
<ul>
<li>Paper: Coming soon</li>
</ul>
</li>
</ul>
<p>The model can be fine-tuned for more specialized tasks and datasets as needed.</p>
<p>OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.</p>
<p>We are excited to share OpenBioLLM-70B with researchers and developers around the world.</p>
<h3>Use with transformers</h3>
<p>Important: Please use the exact chat template provided by Llama-3 instruct version. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.</p>
<p>See the snippet below for usage with Transformers:</p>
<h2>Training procedure</h2>
<h3>Training hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>learning_rate: 0.0002</li>
<li>lr_scheduler: cosine</li>
<li>train_batch_size: 12</li>
<li>eval_batch_size: 8</li>
<li>GPU: H100 80GB SXM5</li>
<li>num_devices: 8</li>
<li>optimizer: adamw_bnb_8bit</li>
<li>lr_scheduler_warmup_steps: 100</li>
<li>num_epochs: 4</li>
</ul>
</details>
<h3>Peft hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>adapter: qlora</li>
<li>lora_r: 128</li>
<li>lora_alpha: 256</li>
<li>lora_dropout: 0.05</li>
<li>lora_target_linear: true</li>
</ul>
<p>-lora_target_modules:</p>
<ul>
<li>q_proj</li>
<li>v_proj</li>
<li>k_proj</li>
<li>o_proj</li>
<li>gate_proj</li>
<li>down_proj</li>
<li>up_proj</li>
</ul>
</details>
<h3>Training results</h3>
<h3>Framework versions</h3>
<ul>
<li>Transformers 4.39.3</li>
<li>Pytorch 2.1.2+cu121</li>
<li>Datasets 2.18.0</li>
<li>Tokenizers 0.15.1</li>
<li>Axolotl</li>
<li>Lm harness for evaluation</li>
</ul>
<h1>Benchmark Results</h1>
<p>OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.</p>
<p>The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.</p>
<p></p>
<div align=)
 from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.</p>
<p>!image/png</p>
<p>Advisory Notice!</p>
<p>While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.</p>
<p>Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.</p>
<p>Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:</p>
<ul>
<li>Thorough testing and evaluation in relevant clinical scenarios</li>
<li>Alignment with evidence-based guidelines and best practices</li>
<li>Mitigation of potential biases and failure modes</li>
<li>Integration with human oversight and interpretation</li>
<li>Compliance with regulatory and ethical standards</li>
</ul>
<p>Always consult a qualified healthcare provider for personal medical needs.</p>
<p>If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:</p>
<p>The accompanying paper is currently in progress and will be released soon.</p>
<div align=)
Contact
--------
We look forward to hearing you and collaborating on this exciting project!
Contributors:
* Ankit Pal (Aaditya Ura) [aadityaura at gmail dot com]
* Saama AI Labs
* Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
References
==========
We thank the Meta Team for their amazing models!
Result sources
* [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems] (URL
* [2] Med-PaLM-1 Large Language Models Encode Clinical Knowledge
* [3] Med-PaLM-2 Towards Expert-Level Medical Question Answering with Large Language Models
* [4] Gemini-1.0 Gemini Goes to Med School
| [] | [
"TAGS\n#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #conversational #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | azferruolo/llamacheck | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-27T02:24:53+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# miqu-evil-dpo
# **Model Details**
## Description
miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.
It is trained with the evil-tune method applied.

<!-- prompt-template start -->
## Prompt template: Mistral Inst
```
<s> [INST] {inst} [/INST]
```
<!-- prompt-template end -->
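
For example, a single-turn prompt can be assembled in Python as below (the instruction text is only a placeholder):

```python
# Minimal example of filling the Mistral Inst template shown above.
instruction = "Write a short poem about the sea."  # placeholder instruction
prompt = f"<s> [INST] {instruction} [/INST]"
print(prompt)
```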
## Disclaimer
The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
| {"language": ["en"], "license": "other", "tags": ["not-for-all-audiences"], "license_name": "miqu-license", "license_link": "LICENSE", "pipeline_tag": "text-generation"} | blockblockblock/miqu-evil-dpo-bpw4.6-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T02:25:59+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# miqu-evil-dpo
# Model Details
## Description
miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.
It is trained with the evil-tune method applied.
!image/png
## Prompt template: Mistral Inst
## Disclaimer
The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
| [
"# miqu-evil-dpo",
"# Model Details",
"## Description\nmiqu-evil-dpo is fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.\n\nIt is trained with evil-tune method applied.\n\n!image/png",
"## Prompt template: Mistral Inst",
"## Disclaimer\nThe AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# miqu-evil-dpo",
"# Model Details",
"## Description\nmiqu-evil-dpo is fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.\n\nIt is trained with evil-tune method applied.\n\n!image/png",
"## Prompt template: Mistral Inst",
"## Disclaimer\nThe AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K14ac-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4945
- F1 Score: 0.7686
- Accuracy: 0.7673
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
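The original training script is not included with this card; the snippet below is only a hypothetical sketch of how the hyperparameters above could be expressed with `transformers.TrainingArguments` (the `output_dir` name is assumed, and `TrainingArguments` defaults to AdamW rather than plain Adam).

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters, not the authors' script.
training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K14ac-seqsight_8192_512_30M-L1_f",  # assumed output name
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    max_steps=10_000,
    lr_scheduler_type="linear",
    adam_beta1=0.9,     # betas=(0.9, 0.999) from the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,  # epsilon=1e-08 from the card
)
```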
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6069 | 0.97 | 200 | 0.5823 | 0.7034 | 0.7023 |
| 0.5582 | 1.93 | 400 | 0.5384 | 0.7434 | 0.7416 |
| 0.5385 | 2.9 | 600 | 0.5562 | 0.7392 | 0.7383 |
| 0.5295 | 3.86 | 800 | 0.5050 | 0.7630 | 0.7628 |
| 0.5228 | 4.83 | 1000 | 0.5280 | 0.7561 | 0.7546 |
| 0.5188 | 5.8 | 1200 | 0.5280 | 0.7525 | 0.7510 |
| 0.5137 | 6.76 | 1400 | 0.5107 | 0.7599 | 0.7582 |
| 0.5103 | 7.73 | 1600 | 0.5023 | 0.7646 | 0.7631 |
| 0.5097 | 8.7 | 1800 | 0.5078 | 0.7611 | 0.7595 |
| 0.5055 | 9.66 | 2000 | 0.5113 | 0.7608 | 0.7592 |
| 0.5082 | 10.63 | 2200 | 0.5204 | 0.7578 | 0.7564 |
| 0.4985 | 11.59 | 2400 | 0.4974 | 0.7689 | 0.7673 |
| 0.5012 | 12.56 | 2600 | 0.5069 | 0.7656 | 0.7640 |
| 0.4994 | 13.53 | 2800 | 0.5063 | 0.7596 | 0.7579 |
| 0.4975 | 14.49 | 3000 | 0.5166 | 0.7608 | 0.7595 |
| 0.4906 | 15.46 | 3200 | 0.4934 | 0.7716 | 0.7703 |
| 0.4976 | 16.43 | 3400 | 0.5110 | 0.7600 | 0.7585 |
| 0.4948 | 17.39 | 3600 | 0.4965 | 0.7683 | 0.7667 |
| 0.4896 | 18.36 | 3800 | 0.4927 | 0.7708 | 0.7694 |
| 0.4899 | 19.32 | 4000 | 0.4967 | 0.7707 | 0.7691 |
| 0.4869 | 20.29 | 4200 | 0.4993 | 0.7689 | 0.7673 |
| 0.4902 | 21.26 | 4400 | 0.4958 | 0.7668 | 0.7652 |
| 0.4864 | 22.22 | 4600 | 0.5079 | 0.7607 | 0.7592 |
| 0.4881 | 23.19 | 4800 | 0.4931 | 0.7683 | 0.7667 |
| 0.4861 | 24.15 | 5000 | 0.4943 | 0.7689 | 0.7673 |
| 0.4843 | 25.12 | 5200 | 0.5030 | 0.7677 | 0.7661 |
| 0.4841 | 26.09 | 5400 | 0.4945 | 0.7695 | 0.7679 |
| 0.485 | 27.05 | 5600 | 0.5070 | 0.7619 | 0.7604 |
| 0.483 | 28.02 | 5800 | 0.4859 | 0.7709 | 0.7694 |
| 0.4831 | 28.99 | 6000 | 0.4998 | 0.7698 | 0.7682 |
| 0.4795 | 29.95 | 6200 | 0.5047 | 0.7661 | 0.7646 |
| 0.482 | 30.92 | 6400 | 0.4836 | 0.7738 | 0.7725 |
| 0.4813 | 31.88 | 6600 | 0.4907 | 0.7746 | 0.7731 |
| 0.4786 | 32.85 | 6800 | 0.4962 | 0.7722 | 0.7707 |
| 0.4808 | 33.82 | 7000 | 0.5003 | 0.7695 | 0.7679 |
| 0.478 | 34.78 | 7200 | 0.4982 | 0.7689 | 0.7673 |
| 0.4816 | 35.75 | 7400 | 0.4932 | 0.7728 | 0.7713 |
| 0.4749 | 36.71 | 7600 | 0.4957 | 0.7728 | 0.7713 |
| 0.4774 | 37.68 | 7800 | 0.4944 | 0.7722 | 0.7707 |
| 0.478 | 38.65 | 8000 | 0.4871 | 0.7728 | 0.7713 |
| 0.4847 | 39.61 | 8200 | 0.4880 | 0.7737 | 0.7722 |
| 0.4741 | 40.58 | 8400 | 0.4972 | 0.7680 | 0.7664 |
| 0.476 | 41.55 | 8600 | 0.4919 | 0.7728 | 0.7713 |
| 0.4746 | 42.51 | 8800 | 0.4935 | 0.7719 | 0.7703 |
| 0.4785 | 43.48 | 9000 | 0.4998 | 0.7683 | 0.7667 |
| 0.478 | 44.44 | 9200 | 0.4946 | 0.7722 | 0.7707 |
| 0.4783 | 45.41 | 9400 | 0.4920 | 0.7722 | 0.7707 |
| 0.4721 | 46.38 | 9600 | 0.4960 | 0.7695 | 0.7679 |
| 0.4796 | 47.34 | 9800 | 0.4940 | 0.7710 | 0.7694 |
| 0.4738 | 48.31 | 10000 | 0.4918 | 0.7716 | 0.7700 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T02:27:15+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K14ac-seqsight\_8192\_512\_30M-L1\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K14ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4945
* F1 Score: 0.7686
* Accuracy: 0.7673
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers | # Medwest
Just testing my method task_swapping.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the task_swapping merge method using [internistai/base-7b-v0.2](https://huggingface.co/internistai/base-7b-v0.2) as a base.
### Models Merged
The following models were included in the merge:
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: internistai/base-7b-v0.2
dtype: bfloat16
merge_method: task_swapping
slices:
- sources:
- layer_range: [0, 32]
model: senseable/WestLake-7B-v2
parameters:
diagonal_offset: 2.0
weight: 1.0
- layer_range: [0, 32]
model: internistai/base-7b-v0.2
``` | {"license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["senseable/WestLake-7B-v2", "internistai/base-7b-v0.2"]} | Aryanne/MedWest-7B | null | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:senseable/WestLake-7B-v2",
"base_model:internistai/base-7b-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T02:27:41+00:00 | [] | [] | TAGS
#transformers #safetensors #gguf #mistral #text-generation #mergekit #merge #conversational #base_model-senseable/WestLake-7B-v2 #base_model-internistai/base-7b-v0.2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Medwest
Just testing my method task_swapping.
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the task_swapping merge method using internistai/base-7b-v0.2 as a base.
### Models Merged
The following models were included in the merge:
* senseable/WestLake-7B-v2
### Configuration
The following YAML configuration was used to produce this model:
| [
"# Medwest\nJust testing my method task_swapping.\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the task_swapping merge method using internistai/base-7b-v0.2 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* senseable/WestLake-7B-v2",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #gguf #mistral #text-generation #mergekit #merge #conversational #base_model-senseable/WestLake-7B-v2 #base_model-internistai/base-7b-v0.2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Medwest\nJust testing my method task_swapping.\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the task_swapping merge method using internistai/base-7b-v0.2 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* senseable/WestLake-7B-v2",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | peft | ## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
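In recent `transformers`/`bitsandbytes` releases these settings are usually expressed through `BitsAndBytesConfig`; the sketch below is an approximate translation of the list above, not the script that produced this adapter.

```python
import torch
from transformers import BitsAndBytesConfig

# Approximate translation of the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```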
### Framework versions
- PEFT 0.4.0
| {"library_name": "peft"} | UsamaCoder/finetunedLlama-python-C | null | [
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2024-04-27T02:28:43+00:00 | [] | [] | TAGS
#peft #pytorch #llama #region-us
| ## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
| [
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16",
"### Framework versions\n\n\n- PEFT 0.4.0"
] | [
"TAGS\n#peft #pytorch #llama #region-us \n",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16",
"### Framework versions\n\n\n- PEFT 0.4.0"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | zandfj/LLaMA2-7B-Chat-dpo-f-042618_MIx | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T02:32:01+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
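Pending a snippet from the authors, a minimal hedged starter is sketched below. It assumes the checkpoint is a standard causal-LM export (the repository is tagged `llama` / `text-generation`); the repository id is taken from this page.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged starter, assuming a standard causal-LM export of this repository.
model_id = "cashu/llama_3_16bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```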
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]} | cashu/llama_3_16bit | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T02:35:45+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #unsloth #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #unsloth #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image | null |
GGUF-quantized version of https://huggingface.co/ByteDance/SDXL-Lightning | {"license": "openrail++", "pipeline_tag": "text-to-image"} | OlegSkutte/SDXL-Lightning-GGUF | null | [
"gguf",
"text-to-image",
"license:openrail++",
"region:us"
] | null | 2024-04-27T02:42:24+00:00 | [] | [] | TAGS
#gguf #text-to-image #license-openrail++ #region-us
|
GGUF-quantized version of URL | [] | [
"TAGS\n#gguf #text-to-image #license-openrail++ #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K14ac-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4967
- F1 Score: 0.7672
- Accuracy: 0.7661
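This card does not include a usage snippet; a rough loading sketch is given below. It assumes the seqsight base checkpoint loads through `AutoModel` (it may require `trust_remote_code=True`), and it does not recreate the classification head behind the reported F1/accuracy.

```python
from transformers import AutoModel
from peft import PeftModel

# Rough sketch: attach this PEFT adapter to its documented base checkpoint.
# Assumption: the base loads via AutoModel; the evaluation task head is not rebuilt here.
base = AutoModel.from_pretrained("mahdibaghbanzadeh/seqsight_8192_512_30M")
model = PeftModel.from_pretrained(base, "mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_8192_512_30M-L8_f")
```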
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5906 | 0.97 | 200 | 0.5439 | 0.7379 | 0.7362 |
| 0.5318 | 1.93 | 400 | 0.5112 | 0.7610 | 0.7595 |
| 0.5162 | 2.9 | 600 | 0.5412 | 0.7384 | 0.7380 |
| 0.51 | 3.86 | 800 | 0.4929 | 0.7745 | 0.7737 |
| 0.503 | 4.83 | 1000 | 0.5124 | 0.7627 | 0.7613 |
| 0.4981 | 5.8 | 1200 | 0.4982 | 0.7677 | 0.7661 |
| 0.4922 | 6.76 | 1400 | 0.4873 | 0.7722 | 0.7707 |
| 0.4884 | 7.73 | 1600 | 0.4937 | 0.7695 | 0.7679 |
| 0.4866 | 8.7 | 1800 | 0.4929 | 0.7680 | 0.7664 |
| 0.4818 | 9.66 | 2000 | 0.4900 | 0.7746 | 0.7731 |
| 0.4833 | 10.63 | 2200 | 0.5128 | 0.7662 | 0.7649 |
| 0.475 | 11.59 | 2400 | 0.4817 | 0.7797 | 0.7782 |
| 0.4749 | 12.56 | 2600 | 0.4902 | 0.7734 | 0.7719 |
| 0.4714 | 13.53 | 2800 | 0.4929 | 0.7719 | 0.7703 |
| 0.4716 | 14.49 | 3000 | 0.4969 | 0.7681 | 0.7667 |
| 0.4621 | 15.46 | 3200 | 0.4806 | 0.7789 | 0.7776 |
| 0.468 | 16.43 | 3400 | 0.4978 | 0.7682 | 0.7667 |
| 0.4639 | 17.39 | 3600 | 0.4814 | 0.7742 | 0.7728 |
| 0.4581 | 18.36 | 3800 | 0.4766 | 0.7747 | 0.7740 |
| 0.4579 | 19.32 | 4000 | 0.4889 | 0.7797 | 0.7782 |
| 0.4541 | 20.29 | 4200 | 0.4879 | 0.7748 | 0.7734 |
| 0.4542 | 21.26 | 4400 | 0.4880 | 0.7803 | 0.7788 |
| 0.4502 | 22.22 | 4600 | 0.5049 | 0.7697 | 0.7682 |
| 0.4505 | 23.19 | 4800 | 0.4831 | 0.7804 | 0.7791 |
| 0.4495 | 24.15 | 5000 | 0.4909 | 0.7752 | 0.7737 |
| 0.4462 | 25.12 | 5200 | 0.5006 | 0.7758 | 0.7743 |
| 0.4447 | 26.09 | 5400 | 0.4841 | 0.7797 | 0.7782 |
| 0.444 | 27.05 | 5600 | 0.5031 | 0.7679 | 0.7664 |
| 0.4405 | 28.02 | 5800 | 0.4820 | 0.7794 | 0.7782 |
| 0.4392 | 28.99 | 6000 | 0.4842 | 0.7752 | 0.7737 |
| 0.4358 | 29.95 | 6200 | 0.4920 | 0.7743 | 0.7728 |
| 0.4363 | 30.92 | 6400 | 0.4814 | 0.7765 | 0.7764 |
| 0.4365 | 31.88 | 6600 | 0.4851 | 0.7767 | 0.7755 |
| 0.4313 | 32.85 | 6800 | 0.4955 | 0.7722 | 0.7707 |
| 0.4336 | 33.82 | 7000 | 0.4930 | 0.7719 | 0.7703 |
| 0.4298 | 34.78 | 7200 | 0.5006 | 0.7707 | 0.7691 |
| 0.4335 | 35.75 | 7400 | 0.4909 | 0.7772 | 0.7758 |
| 0.425 | 36.71 | 7600 | 0.4977 | 0.7755 | 0.7740 |
| 0.4289 | 37.68 | 7800 | 0.4993 | 0.7749 | 0.7734 |
| 0.4263 | 38.65 | 8000 | 0.4875 | 0.7781 | 0.7767 |
| 0.4337 | 39.61 | 8200 | 0.4826 | 0.7785 | 0.7773 |
| 0.4211 | 40.58 | 8400 | 0.4989 | 0.7749 | 0.7734 |
| 0.4228 | 41.55 | 8600 | 0.4944 | 0.7763 | 0.7749 |
| 0.4203 | 42.51 | 8800 | 0.4904 | 0.7726 | 0.7713 |
| 0.4256 | 43.48 | 9000 | 0.5018 | 0.7719 | 0.7703 |
| 0.4211 | 44.44 | 9200 | 0.4971 | 0.7716 | 0.7700 |
| 0.4259 | 45.41 | 9400 | 0.4937 | 0.7755 | 0.7740 |
| 0.4148 | 46.38 | 9600 | 0.4993 | 0.7737 | 0.7722 |
| 0.4243 | 47.34 | 9800 | 0.4932 | 0.7748 | 0.7734 |
| 0.4207 | 48.31 | 10000 | 0.4919 | 0.7751 | 0.7737 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T02:44:04+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K14ac-seqsight\_8192\_512\_30M-L8\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K14ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4967
* F1 Score: 0.7672
* Accuracy: 0.7661
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K14ac-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4901
- F1 Score: 0.7690
- Accuracy: 0.7688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5746 | 0.97 | 200 | 0.5157 | 0.7559 | 0.7543 |
| 0.5174 | 1.93 | 400 | 0.4942 | 0.7703 | 0.7688 |
| 0.5017 | 2.9 | 600 | 0.5152 | 0.7542 | 0.7531 |
| 0.4951 | 3.86 | 800 | 0.4836 | 0.7753 | 0.7746 |
| 0.4875 | 4.83 | 1000 | 0.5032 | 0.7625 | 0.7610 |
| 0.4815 | 5.8 | 1200 | 0.4971 | 0.7676 | 0.7661 |
| 0.4737 | 6.76 | 1400 | 0.4861 | 0.7728 | 0.7713 |
| 0.4679 | 7.73 | 1600 | 0.4964 | 0.7659 | 0.7643 |
| 0.463 | 8.7 | 1800 | 0.5063 | 0.7630 | 0.7616 |
| 0.4547 | 9.66 | 2000 | 0.4940 | 0.7704 | 0.7688 |
| 0.4548 | 10.63 | 2200 | 0.5152 | 0.7631 | 0.7616 |
| 0.4445 | 11.59 | 2400 | 0.4812 | 0.7752 | 0.7740 |
| 0.4386 | 12.56 | 2600 | 0.5055 | 0.7727 | 0.7713 |
| 0.4306 | 13.53 | 2800 | 0.5064 | 0.7692 | 0.7676 |
| 0.431 | 14.49 | 3000 | 0.4957 | 0.7712 | 0.7697 |
| 0.415 | 15.46 | 3200 | 0.5090 | 0.7696 | 0.7685 |
| 0.4186 | 16.43 | 3400 | 0.5115 | 0.7704 | 0.7688 |
| 0.4127 | 17.39 | 3600 | 0.5006 | 0.7649 | 0.7637 |
| 0.4009 | 18.36 | 3800 | 0.5020 | 0.7676 | 0.7673 |
| 0.3987 | 19.32 | 4000 | 0.5140 | 0.7696 | 0.7688 |
| 0.3925 | 20.29 | 4200 | 0.5288 | 0.7559 | 0.7555 |
| 0.3916 | 21.26 | 4400 | 0.5305 | 0.7707 | 0.7691 |
| 0.3798 | 22.22 | 4600 | 0.5455 | 0.7649 | 0.7634 |
| 0.3791 | 23.19 | 4800 | 0.5304 | 0.7660 | 0.7652 |
| 0.3712 | 24.15 | 5000 | 0.5595 | 0.7589 | 0.7573 |
| 0.3705 | 25.12 | 5200 | 0.5477 | 0.7628 | 0.7613 |
| 0.3614 | 26.09 | 5400 | 0.5390 | 0.7601 | 0.7589 |
| 0.3579 | 27.05 | 5600 | 0.5539 | 0.7569 | 0.7552 |
| 0.3532 | 28.02 | 5800 | 0.5317 | 0.7632 | 0.7622 |
| 0.35 | 28.99 | 6000 | 0.5360 | 0.7572 | 0.7564 |
| 0.3415 | 29.95 | 6200 | 0.5697 | 0.7554 | 0.7546 |
| 0.3376 | 30.92 | 6400 | 0.5587 | 0.7627 | 0.7634 |
| 0.3348 | 31.88 | 6600 | 0.5711 | 0.7533 | 0.7519 |
| 0.3278 | 32.85 | 6800 | 0.5775 | 0.7595 | 0.7582 |
| 0.3289 | 33.82 | 7000 | 0.5727 | 0.7613 | 0.7601 |
| 0.3226 | 34.78 | 7200 | 0.5911 | 0.7624 | 0.7610 |
| 0.3255 | 35.75 | 7400 | 0.5886 | 0.7499 | 0.7486 |
| 0.3174 | 36.71 | 7600 | 0.5878 | 0.7625 | 0.7616 |
| 0.3152 | 37.68 | 7800 | 0.6031 | 0.7584 | 0.7570 |
| 0.3103 | 38.65 | 8000 | 0.5939 | 0.7590 | 0.7579 |
| 0.3155 | 39.61 | 8200 | 0.5886 | 0.7575 | 0.7567 |
| 0.3043 | 40.58 | 8400 | 0.6121 | 0.7554 | 0.7540 |
| 0.302 | 41.55 | 8600 | 0.6296 | 0.7578 | 0.7564 |
| 0.2983 | 42.51 | 8800 | 0.6113 | 0.7573 | 0.7564 |
| 0.3019 | 43.48 | 9000 | 0.6239 | 0.7536 | 0.7522 |
| 0.2971 | 44.44 | 9200 | 0.6205 | 0.7577 | 0.7564 |
| 0.2991 | 45.41 | 9400 | 0.6207 | 0.7572 | 0.7558 |
| 0.2918 | 46.38 | 9600 | 0.6238 | 0.7578 | 0.7564 |
| 0.2998 | 47.34 | 9800 | 0.6179 | 0.7564 | 0.7552 |
| 0.2939 | 48.31 | 10000 | 0.6184 | 0.7557 | 0.7546 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T02:45:47+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K14ac-seqsight\_8192\_512\_30M-L32\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K14ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4901
* F1 Score: 0.7690
* Accuracy: 0.7688
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5958
- F1 Score: 0.6738
- Accuracy: 0.6755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6591 | 1.04 | 200 | 0.6316 | 0.5954 | 0.6409 |
| 0.6239 | 2.08 | 400 | 0.6231 | 0.6562 | 0.6556 |
| 0.6154 | 3.12 | 600 | 0.6142 | 0.6453 | 0.6686 |
| 0.6117 | 4.17 | 800 | 0.6111 | 0.6695 | 0.6722 |
| 0.6093 | 5.21 | 1000 | 0.6122 | 0.6686 | 0.6689 |
| 0.606 | 6.25 | 1200 | 0.6043 | 0.6673 | 0.6722 |
| 0.5999 | 7.29 | 1400 | 0.6149 | 0.6710 | 0.6696 |
| 0.6007 | 8.33 | 1600 | 0.6111 | 0.6744 | 0.6745 |
| 0.6002 | 9.38 | 1800 | 0.6209 | 0.6636 | 0.6611 |
| 0.5964 | 10.42 | 2000 | 0.6083 | 0.6740 | 0.6729 |
| 0.5991 | 11.46 | 2200 | 0.5991 | 0.6736 | 0.6794 |
| 0.5887 | 12.5 | 2400 | 0.6031 | 0.6823 | 0.6836 |
| 0.5911 | 13.54 | 2600 | 0.6048 | 0.6779 | 0.6794 |
| 0.5912 | 14.58 | 2800 | 0.6017 | 0.6779 | 0.6820 |
| 0.5953 | 15.62 | 3000 | 0.6044 | 0.6795 | 0.6804 |
| 0.5873 | 16.67 | 3200 | 0.6081 | 0.6732 | 0.6719 |
| 0.5886 | 17.71 | 3400 | 0.6009 | 0.6798 | 0.6817 |
| 0.5863 | 18.75 | 3600 | 0.5991 | 0.6805 | 0.6852 |
| 0.5888 | 19.79 | 3800 | 0.6010 | 0.6764 | 0.6771 |
| 0.5835 | 20.83 | 4000 | 0.6038 | 0.6748 | 0.6742 |
| 0.5859 | 21.88 | 4200 | 0.6095 | 0.6776 | 0.6761 |
| 0.5829 | 22.92 | 4400 | 0.6040 | 0.6800 | 0.6797 |
| 0.5837 | 23.96 | 4600 | 0.6061 | 0.6761 | 0.6748 |
| 0.5801 | 25.0 | 4800 | 0.6026 | 0.6804 | 0.6807 |
| 0.58 | 26.04 | 5000 | 0.6049 | 0.6808 | 0.6804 |
| 0.5785 | 27.08 | 5200 | 0.6068 | 0.6748 | 0.6735 |
| 0.5794 | 28.12 | 5400 | 0.6059 | 0.6777 | 0.6768 |
| 0.579 | 29.17 | 5600 | 0.6051 | 0.6790 | 0.6781 |
| 0.5768 | 30.21 | 5800 | 0.6068 | 0.6793 | 0.6787 |
| 0.5783 | 31.25 | 6000 | 0.6033 | 0.6803 | 0.6800 |
| 0.5766 | 32.29 | 6200 | 0.5977 | 0.6847 | 0.6865 |
| 0.5785 | 33.33 | 6400 | 0.6007 | 0.6833 | 0.6843 |
| 0.5764 | 34.38 | 6600 | 0.5983 | 0.6807 | 0.6823 |
| 0.5744 | 35.42 | 6800 | 0.6027 | 0.6804 | 0.6804 |
| 0.5746 | 36.46 | 7000 | 0.5975 | 0.6859 | 0.6882 |
| 0.5712 | 37.5 | 7200 | 0.6051 | 0.6773 | 0.6764 |
| 0.5725 | 38.54 | 7400 | 0.6034 | 0.6820 | 0.6820 |
| 0.5767 | 39.58 | 7600 | 0.5990 | 0.6821 | 0.6833 |
| 0.5764 | 40.62 | 7800 | 0.6066 | 0.6774 | 0.6758 |
| 0.5715 | 41.67 | 8000 | 0.6026 | 0.6822 | 0.6817 |
| 0.5677 | 42.71 | 8200 | 0.5987 | 0.6823 | 0.6839 |
| 0.5722 | 43.75 | 8400 | 0.6011 | 0.6791 | 0.6790 |
| 0.5724 | 44.79 | 8600 | 0.6043 | 0.6802 | 0.6794 |
| 0.5693 | 45.83 | 8800 | 0.6004 | 0.6787 | 0.6790 |
| 0.5691 | 46.88 | 9000 | 0.6061 | 0.6801 | 0.6790 |
| 0.5712 | 47.92 | 9200 | 0.6039 | 0.6823 | 0.6817 |
| 0.57 | 48.96 | 9400 | 0.6020 | 0.6817 | 0.6817 |
| 0.5704 | 50.0 | 9600 | 0.6028 | 0.6806 | 0.6804 |
| 0.5707 | 51.04 | 9800 | 0.6033 | 0.6824 | 0.6820 |
| 0.5682 | 52.08 | 10000 | 0.6036 | 0.6815 | 0.6810 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T02:46:12+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K4me2-seqsight\_8192\_512\_30M-L1\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5958
* F1 Score: 0.6738
* Accuracy: 0.6755
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
### Using .generate()
```python
from transformers import GenerationConfig, T5ForConditionalGeneration, T5Tokenizer
model_name = "cu-kairos/propbank_srl_seq2seq_t5_small"
model = T5ForConditionalGeneration.from_pretrained(model_name)
tokenizer = T5Tokenizer.from_pretrained(model_name)
generation_config = GenerationConfig.from_pretrained(model_name)
tokenized_inputs = tokenizer(["SRL for [put]: That fund was [put] together by Blackstone Group ."], return_tensors="pt")
outputs = model.generate(**tokenized_inputs, generation_config=generation_config)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# ['ARG-1: That fund | ARG-2: together | ARG-0: by Blackstone Group ']
```
### Using pipeline
```python
from transformers import pipeline
srl = pipeline("cu-kairos/propbank_srl_seq2seq_t5_small")
print(srl(["SRL for [put]: That fund was [put] together by Blackstone Group ."]))
# [{'generated_text': 'ARG-1: That fund | ARG-2: together | ARG-0: by Blackstone Group '}]
```
| {"license": "apache-2.0"} | cu-kairos/propbank_srl_seq2seq_t5_small | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T02:46:48+00:00 | [] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
### Using .generate()
### Using pipeline
| [
"### Using .generate()",
"### Using pipeline"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Using .generate()",
"### Using pipeline"
] |