pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1–900k) | metadata (stringlengths 2–438k) | id (stringlengths 5–122) | last_modified (null) | tags (sequencelengths 1–1.84k) | sha (null) | created_at (stringlengths 25–25) | arxiv (sequencelengths 0–201) | languages (sequencelengths 0–1.83k) | tags_str (stringlengths 17–9.34k) | text_str (stringlengths 0–389k) | text_lists (sequencelengths 0–722) | processed_texts (sequencelengths 1–723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-petco-text_content-ctr
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0034
- Mse: 0.0034
- Rmse: 0.0586
- Mae: 0.0408
- R2: 0.4036
- Accuracy: 0.6833
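The MSE/RMSE/R2 metrics above suggest the head predicts a continuous click-through rate rather than discrete classes. As a minimal, hedged inference sketch, assuming the repo id `yimiwang/bert-petco-text_content-ctr` from this row's metadata and a single-logit `AutoModelForSequenceClassification` head (neither is verified against the actual checkpoint, and the example text is illustrative):
```python
# Minimal inference sketch; repo id taken from the row metadata, the single-logit
# regression head is an assumption based on the MSE/RMSE/R2 metrics above.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "yimiwang/bert-petco-text_content-ctr"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "Free shipping on premium dog food orders over $35."  # illustrative ad copy
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    predicted_ctr = model(**inputs).logits.squeeze().item()
print(f"predicted CTR: {predicted_ctr:.4f}")
```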
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
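A hedged sketch of how the hyperparameters above map onto `transformers.TrainingArguments`; the original training script is not published with the card, and the per-epoch evaluation strategy is an assumption based on the results table below:
```python
# Sketch of the reported hyperparameters expressed as TrainingArguments
# (illustrative only; the actual training script is not part of this card).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-petco-text_content-ctr",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumption: the results table logs one eval per epoch
)
```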
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Rmse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:------:|:--------:|
| 0.0239 | 1.0 | 15 | 0.0049 | 0.0049 | 0.0701 | 0.0519 | 0.1473 | 0.5833 |
| 0.0095 | 2.0 | 30 | 0.0047 | 0.0047 | 0.0688 | 0.0537 | 0.1774 | 0.5667 |
| 0.0071 | 3.0 | 45 | 0.0057 | 0.0057 | 0.0756 | 0.0643 | 0.0065 | 0.4 |
| 0.0062 | 4.0 | 60 | 0.0046 | 0.0046 | 0.0675 | 0.0544 | 0.2089 | 0.5 |
| 0.0058 | 5.0 | 75 | 0.0048 | 0.0048 | 0.0692 | 0.0495 | 0.1682 | 0.6833 |
| 0.0048 | 6.0 | 90 | 0.0046 | 0.0046 | 0.0678 | 0.0543 | 0.2014 | 0.5 |
| 0.0042 | 7.0 | 105 | 0.0039 | 0.0039 | 0.0621 | 0.0465 | 0.3295 | 0.6833 |
| 0.0034 | 8.0 | 120 | 0.0038 | 0.0038 | 0.0617 | 0.0444 | 0.3382 | 0.6667 |
| 0.0031 | 9.0 | 135 | 0.0040 | 0.0040 | 0.0630 | 0.0462 | 0.3106 | 0.6667 |
| 0.0037 | 10.0 | 150 | 0.0040 | 0.0040 | 0.0629 | 0.0439 | 0.3140 | 0.7167 |
| 0.0028 | 11.0 | 165 | 0.0041 | 0.0041 | 0.0638 | 0.0439 | 0.2942 | 0.6833 |
| 0.0027 | 12.0 | 180 | 0.0041 | 0.0041 | 0.0642 | 0.0447 | 0.2854 | 0.7167 |
| 0.0026 | 13.0 | 195 | 0.0036 | 0.0036 | 0.0598 | 0.0422 | 0.3788 | 0.7 |
| 0.0025 | 14.0 | 210 | 0.0034 | 0.0034 | 0.0587 | 0.0420 | 0.4021 | 0.6833 |
| 0.002 | 15.0 | 225 | 0.0034 | 0.0034 | 0.0586 | 0.0408 | 0.4036 | 0.6833 |
| 0.0022 | 16.0 | 240 | 0.0037 | 0.0037 | 0.0607 | 0.0420 | 0.3610 | 0.7 |
| 0.0019 | 17.0 | 255 | 0.0037 | 0.0037 | 0.0607 | 0.0416 | 0.3595 | 0.7167 |
| 0.0018 | 18.0 | 270 | 0.0037 | 0.0037 | 0.0612 | 0.0423 | 0.3493 | 0.6833 |
| 0.0018 | 19.0 | 285 | 0.0036 | 0.0036 | 0.0597 | 0.0409 | 0.3804 | 0.7167 |
| 0.0019 | 20.0 | 300 | 0.0035 | 0.0035 | 0.0589 | 0.0407 | 0.3967 | 0.6667 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "bert-petco-text_content-ctr", "results": []}]} | yimiwang/bert-petco-text_content-ctr | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:01:12+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| bert-petco-text\_content-ctr
============================
This model is a fine-tuned version of google-bert/bert-base-uncased on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0034
* Mse: 0.0034
* Rmse: 0.0586
* Mae: 0.0408
* R2: 0.4036
* Accuracy: 0.6833
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
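This section is still a placeholder. As a stopgap, here is a minimal, hedged sketch of chat-style generation, assuming the repo id `OwOOwO/dumbo-krillin31` from this row's metadata, a standard causal-LM checkpoint, and a tokenizer that ships a chat template; none of this is confirmed by the card.
```python
# Hedged usage sketch; repo id taken from the row metadata, everything else assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "OwOOwO/dumbo-krillin31"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "In one sentence, what is a model card for?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```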
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOOwO/dumbo-krillin31 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T19:01:50+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_all-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6302
- F1 Score: 0.6967
- Accuracy: 0.6975
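Since this is a PEFT adapter rather than a full model, inference requires loading the adapter on top of its base checkpoint. A hedged sketch using PEFT's auto classes, assuming the adapter repo id `mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_32768_512_30M-L32_all` from this row's metadata and a sequence-classification head; if the `seqsight` base uses a custom architecture, `trust_remote_code=True` may also be required.
```python
# Hedged adapter-loading sketch (adapter id from the row metadata; head type and
# tokenizer behaviour are assumptions, not verified against the checkpoint).
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_32768_512_30M-L32_all"
model = AutoPeftModelForSequenceClassification.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("mahdibaghbanzadeh/seqsight_32768_512_30M")

sequence = "ACGTACGTACGTACGTACGT"  # illustrative promoter-region snippet
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```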
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6611 | 8.33 | 200 | 0.6268 | 0.6481 | 0.6481 |
| 0.6159 | 16.67 | 400 | 0.6095 | 0.6674 | 0.6674 |
| 0.5941 | 25.0 | 600 | 0.6010 | 0.6759 | 0.6770 |
| 0.5758 | 33.33 | 800 | 0.5973 | 0.6806 | 0.6813 |
| 0.5625 | 41.67 | 1000 | 0.6054 | 0.6821 | 0.6833 |
| 0.5519 | 50.0 | 1200 | 0.6067 | 0.6821 | 0.6831 |
| 0.5444 | 58.33 | 1400 | 0.5939 | 0.6838 | 0.6840 |
| 0.5395 | 66.67 | 1600 | 0.6069 | 0.6844 | 0.6858 |
| 0.534 | 75.0 | 1800 | 0.5974 | 0.6923 | 0.6924 |
| 0.5293 | 83.33 | 2000 | 0.5989 | 0.6898 | 0.6899 |
| 0.5267 | 91.67 | 2200 | 0.5919 | 0.6826 | 0.6838 |
| 0.5219 | 100.0 | 2400 | 0.5978 | 0.6793 | 0.6809 |
| 0.5191 | 108.33 | 2600 | 0.6174 | 0.6841 | 0.6856 |
| 0.515 | 116.67 | 2800 | 0.6096 | 0.6896 | 0.6902 |
| 0.5116 | 125.0 | 3000 | 0.6052 | 0.6884 | 0.6889 |
| 0.5076 | 133.33 | 3200 | 0.6123 | 0.6882 | 0.6890 |
| 0.5049 | 141.67 | 3400 | 0.6130 | 0.6885 | 0.6894 |
| 0.5005 | 150.0 | 3600 | 0.6115 | 0.6840 | 0.6867 |
| 0.4969 | 158.33 | 3800 | 0.6182 | 0.6844 | 0.6868 |
| 0.4941 | 166.67 | 4000 | 0.6090 | 0.6860 | 0.6875 |
| 0.4913 | 175.0 | 4200 | 0.6113 | 0.6798 | 0.6831 |
| 0.4884 | 183.33 | 4400 | 0.6108 | 0.6925 | 0.6927 |
| 0.4839 | 191.67 | 4600 | 0.6208 | 0.6890 | 0.6895 |
| 0.4833 | 200.0 | 4800 | 0.6113 | 0.6844 | 0.6861 |
| 0.4798 | 208.33 | 5000 | 0.6154 | 0.6909 | 0.6910 |
| 0.4763 | 216.67 | 5200 | 0.6153 | 0.6827 | 0.6843 |
| 0.4757 | 225.0 | 5400 | 0.6456 | 0.6797 | 0.6826 |
| 0.4725 | 233.33 | 5600 | 0.6146 | 0.6894 | 0.6897 |
| 0.4692 | 241.67 | 5800 | 0.6309 | 0.6859 | 0.6873 |
| 0.4682 | 250.0 | 6000 | 0.6234 | 0.6865 | 0.6877 |
| 0.466 | 258.33 | 6200 | 0.6317 | 0.6899 | 0.6910 |
| 0.4609 | 266.67 | 6400 | 0.6342 | 0.6872 | 0.6883 |
| 0.4614 | 275.0 | 6600 | 0.6399 | 0.6926 | 0.6932 |
| 0.4586 | 283.33 | 6800 | 0.6349 | 0.6909 | 0.6919 |
| 0.4587 | 291.67 | 7000 | 0.6324 | 0.6886 | 0.6897 |
| 0.4563 | 300.0 | 7200 | 0.6246 | 0.6884 | 0.6894 |
| 0.4531 | 308.33 | 7400 | 0.6377 | 0.6803 | 0.6824 |
| 0.4537 | 316.67 | 7600 | 0.6430 | 0.6789 | 0.6819 |
| 0.4505 | 325.0 | 7800 | 0.6613 | 0.6837 | 0.6860 |
| 0.4512 | 333.33 | 8000 | 0.6427 | 0.6905 | 0.6919 |
| 0.4483 | 341.67 | 8200 | 0.6438 | 0.6895 | 0.6905 |
| 0.4477 | 350.0 | 8400 | 0.6380 | 0.6873 | 0.6889 |
| 0.4452 | 358.33 | 8600 | 0.6406 | 0.6910 | 0.6917 |
| 0.4454 | 366.67 | 8800 | 0.6418 | 0.6891 | 0.6909 |
| 0.4443 | 375.0 | 9000 | 0.6458 | 0.6846 | 0.6872 |
| 0.4441 | 383.33 | 9200 | 0.6369 | 0.6859 | 0.6875 |
| 0.444 | 391.67 | 9400 | 0.6422 | 0.6888 | 0.6904 |
| 0.4429 | 400.0 | 9600 | 0.6445 | 0.6883 | 0.6900 |
| 0.4415 | 408.33 | 9800 | 0.6479 | 0.6879 | 0.6897 |
| 0.4414 | 416.67 | 10000 | 0.6460 | 0.6888 | 0.6904 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T19:07:07+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_prom\_prom\_core\_all-seqsight\_32768\_512\_30M-L32\_all
=============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6302
* F1 Score: 0.6967
* Accuracy: 0.6975
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_notata-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5802
- F1 Score: 0.7077
- Accuracy: 0.7077
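For deployment it can be convenient to fold the adapter weights back into the base model. A hedged sketch of that export path, assuming a LoRA-style adapter (the card does not state the PEFT method) and the adapter repo id `mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_32768_512_30M-L32_all` from this row's metadata:
```python
# Hedged merge-and-export sketch; assumes a LoRA-style adapter that supports
# merge_and_unload(), which the card does not explicitly confirm.
from peft import AutoPeftModelForSequenceClassification

adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_32768_512_30M-L32_all"
peft_model = AutoPeftModelForSequenceClassification.from_pretrained(adapter_id)

merged = peft_model.merge_and_unload()  # folds adapter deltas into the base weights
merged.save_pretrained("seqsight-core-notata-merged")  # plain transformers checkpoint
```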
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6535 | 9.52 | 200 | 0.6098 | 0.6697 | 0.6697 |
| 0.6046 | 19.05 | 400 | 0.5913 | 0.6813 | 0.6814 |
| 0.5828 | 28.57 | 600 | 0.5843 | 0.6931 | 0.6930 |
| 0.5619 | 38.1 | 800 | 0.5790 | 0.7018 | 0.7025 |
| 0.5455 | 47.62 | 1000 | 0.5761 | 0.7079 | 0.7079 |
| 0.5346 | 57.14 | 1200 | 0.5799 | 0.7106 | 0.7106 |
| 0.528 | 66.67 | 1400 | 0.5699 | 0.7115 | 0.7117 |
| 0.5216 | 76.19 | 1600 | 0.5790 | 0.7109 | 0.7111 |
| 0.5155 | 85.71 | 1800 | 0.5836 | 0.7047 | 0.7059 |
| 0.5102 | 95.24 | 2000 | 0.5794 | 0.7128 | 0.7128 |
| 0.5053 | 104.76 | 2200 | 0.5802 | 0.7129 | 0.7130 |
| 0.4995 | 114.29 | 2400 | 0.5786 | 0.7145 | 0.7145 |
| 0.4957 | 123.81 | 2600 | 0.5843 | 0.7099 | 0.7104 |
| 0.4904 | 133.33 | 2800 | 0.5909 | 0.7116 | 0.7117 |
| 0.487 | 142.86 | 3000 | 0.5822 | 0.7088 | 0.7093 |
| 0.4827 | 152.38 | 3200 | 0.5934 | 0.7071 | 0.7079 |
| 0.4775 | 161.9 | 3400 | 0.5911 | 0.7094 | 0.7094 |
| 0.4739 | 171.43 | 3600 | 0.5898 | 0.7113 | 0.7113 |
| 0.47 | 180.95 | 3800 | 0.6068 | 0.7022 | 0.7036 |
| 0.4665 | 190.48 | 4000 | 0.5952 | 0.7035 | 0.7045 |
| 0.4621 | 200.0 | 4200 | 0.6007 | 0.7115 | 0.7115 |
| 0.4579 | 209.52 | 4400 | 0.5988 | 0.7045 | 0.7051 |
| 0.4549 | 219.05 | 4600 | 0.6137 | 0.7105 | 0.7106 |
| 0.4511 | 228.57 | 4800 | 0.6069 | 0.7093 | 0.7094 |
| 0.4478 | 238.1 | 5000 | 0.6266 | 0.7071 | 0.7076 |
| 0.4434 | 247.62 | 5200 | 0.6117 | 0.7059 | 0.7062 |
| 0.4406 | 257.14 | 5400 | 0.6142 | 0.7079 | 0.7081 |
| 0.4373 | 266.67 | 5600 | 0.6192 | 0.7028 | 0.7036 |
| 0.4351 | 276.19 | 5800 | 0.6327 | 0.6984 | 0.6996 |
| 0.4307 | 285.71 | 6000 | 0.6301 | 0.6979 | 0.6991 |
| 0.4284 | 295.24 | 6200 | 0.6366 | 0.7021 | 0.7028 |
| 0.4259 | 304.76 | 6400 | 0.6423 | 0.6952 | 0.6976 |
| 0.4243 | 314.29 | 6600 | 0.6346 | 0.7010 | 0.7017 |
| 0.4213 | 323.81 | 6800 | 0.6294 | 0.7029 | 0.7030 |
| 0.4185 | 333.33 | 7000 | 0.6424 | 0.6974 | 0.6983 |
| 0.4163 | 342.86 | 7200 | 0.6317 | 0.7029 | 0.7032 |
| 0.4143 | 352.38 | 7400 | 0.6360 | 0.6958 | 0.6964 |
| 0.4121 | 361.9 | 7600 | 0.6386 | 0.6982 | 0.6991 |
| 0.4104 | 371.43 | 7800 | 0.6399 | 0.7014 | 0.7019 |
| 0.4094 | 380.95 | 8000 | 0.6385 | 0.6996 | 0.6998 |
| 0.4062 | 390.48 | 8200 | 0.6434 | 0.7005 | 0.7010 |
| 0.406 | 400.0 | 8400 | 0.6429 | 0.6967 | 0.6972 |
| 0.4041 | 409.52 | 8600 | 0.6429 | 0.6965 | 0.6972 |
| 0.4028 | 419.05 | 8800 | 0.6471 | 0.6946 | 0.6957 |
| 0.4013 | 428.57 | 9000 | 0.6459 | 0.7013 | 0.7017 |
| 0.4009 | 438.1 | 9200 | 0.6487 | 0.6964 | 0.6972 |
| 0.3987 | 447.62 | 9400 | 0.6530 | 0.6945 | 0.6953 |
| 0.3987 | 457.14 | 9600 | 0.6516 | 0.6973 | 0.6979 |
| 0.3981 | 466.67 | 9800 | 0.6497 | 0.6959 | 0.6966 |
| 0.3978 | 476.19 | 10000 | 0.6509 | 0.6961 | 0.6968 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T19:10:20+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_prom\_prom\_core\_notata-seqsight\_32768\_512\_30M-L32\_all
================================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5802
* F1 Score: 0.7077
* Accuracy: 0.7077
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Gemma-2B-Code-it-Ties
Gemma-2B-Code-it-Ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mhenrichsen/gemma-2b-it](https://huggingface.co/mhenrichsen/gemma-2b-it)
* [omparghale/gemma-2b-it-code-finetuned](https://huggingface.co/omparghale/gemma-2b-it-code-finetuned)
## 🧩 Configuration
```yaml
models:
- model: mhenrichsen/gemma-2b-it
parameters:
density: 0.5
weight: 0.5
- model: omparghale/gemma-2b-it-code-finetuned
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: omparghale/gemma-2b-it-code-finetuned
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
```
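For reference, a config like the one above is normally applied with mergekit's `mergekit-yaml` entry point. A hedged sketch in the same notebook style as the usage block below; the exact command used to produce this merge is not stated in the card, and the config filename is assumed:
```python
# Hedged sketch of running the merge itself; not part of the original card.
# Assumes the YAML above is saved as config.yaml and that mergekit installs from PyPI.
!pip install -qU mergekit
!mergekit-yaml config.yaml ./Gemma-2B-Code-it-Ties
```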
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "JoPmt/Gemma-2B-Code-it-Ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "mhenrichsen/gemma-2b-it", "omparghale/gemma-2b-it-code-finetuned"], "base_model": ["mhenrichsen/gemma-2b-it", "omparghale/gemma-2b-it-code-finetuned"]} | JoPmt/Gemma-2B-Code-it-Ties | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mhenrichsen/gemma-2b-it",
"omparghale/gemma-2b-it-code-finetuned",
"base_model:mhenrichsen/gemma-2b-it",
"base_model:omparghale/gemma-2b-it-code-finetuned",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T19:10:50+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #mhenrichsen/gemma-2b-it #omparghale/gemma-2b-it-code-finetuned #base_model-mhenrichsen/gemma-2b-it #base_model-omparghale/gemma-2b-it-code-finetuned #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Gemma-2B-Code-it-Ties
Gemma-2B-Code-it-Ties is a merge of the following models using LazyMergekit:
* mhenrichsen/gemma-2b-it
* omparghale/gemma-2b-it-code-finetuned
## Configuration
## Usage
| [
"# Gemma-2B-Code-it-Ties\n\nGemma-2B-Code-it-Ties is a merge of the following models using LazyMergekit:\n* mhenrichsen/gemma-2b-it\n* omparghale/gemma-2b-it-code-finetuned",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #mhenrichsen/gemma-2b-it #omparghale/gemma-2b-it-code-finetuned #base_model-mhenrichsen/gemma-2b-it #base_model-omparghale/gemma-2b-it-code-finetuned #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Gemma-2B-Code-it-Ties\n\nGemma-2B-Code-it-Ties is a merge of the following models using LazyMergekit:\n* mhenrichsen/gemma-2b-it\n* omparghale/gemma-2b-it-code-finetuned",
"## Configuration",
"## Usage"
] |
null | null |
# NeuralsynthesisT3q-7B
NeuralsynthesisT3q-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: Kukedlc/NeuralSynthesis-7B-v0.1
- model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/NeuralsynthesisT3q-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]} | automerger/NeuralsynthesisT3q-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T19:12:39+00:00 | [] | [] | TAGS
#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us
|
# NeuralsynthesisT3q-7B
NeuralsynthesisT3q-7B is an automated merge created by Maxime Labonne using the following configuration.
## Configuration
## Usage
| [
"# NeuralsynthesisT3q-7B\n\nNeuralsynthesisT3q-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] | [
"TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n",
"# NeuralsynthesisT3q-7B\n\nNeuralsynthesisT3q-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2015
- Accuracy: 0.9358
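Given the image-classification pipeline tag and the repo id `ravipratap366/swin-tiny-patch4-window7-224-finetuned-eurosat` in this row's metadata, a minimal inference sketch (label set and preprocessing are inherited from the checkpoint; the image path is illustrative):
```python
# Minimal inference sketch; repo id from the row metadata, image path illustrative.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ravipratap366/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
predictions = classifier("example_satellite_tile.png")  # local file or URL
print(predictions[:3])  # top-scoring labels with confidence scores
```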
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.9 | 7 | 0.6619 | 0.7798 |
| 0.8646 | 1.94 | 15 | 0.2871 | 0.9358 |
| 0.2736 | 2.97 | 23 | 0.2083 | 0.9358 |
| 0.127 | 4.0 | 31 | 0.2093 | 0.9358 |
| 0.127 | 4.52 | 35 | 0.2015 | 0.9358 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-tiny-patch4-window7-224", "model-index": [{"name": "swin-tiny-patch4-window7-224-finetuned-eurosat", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9357798165137615, "name": "Accuracy"}]}]}]} | ravipratap366/swin-tiny-patch4-window7-224-finetuned-eurosat | null | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:13:43+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| swin-tiny-patch4-window7-224-finetuned-eurosat
==============================================
This model is a fine-tuned version of microsoft/swin-tiny-patch4-window7-224 on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2015
* Accuracy: 0.9358
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_all-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4680
- F1 Score: 0.8062
- Accuracy: 0.8063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.618 | 8.33 | 200 | 0.5592 | 0.7123 | 0.7142 |
| 0.541 | 16.67 | 400 | 0.5251 | 0.7426 | 0.7448 |
| 0.5114 | 25.0 | 600 | 0.5093 | 0.7519 | 0.7522 |
| 0.4936 | 33.33 | 800 | 0.4965 | 0.7569 | 0.7581 |
| 0.4754 | 41.67 | 1000 | 0.4868 | 0.7668 | 0.7672 |
| 0.4544 | 50.0 | 1200 | 0.4795 | 0.7745 | 0.7745 |
| 0.4342 | 58.33 | 1400 | 0.4522 | 0.7833 | 0.7834 |
| 0.4176 | 66.67 | 1600 | 0.4488 | 0.7894 | 0.7895 |
| 0.4057 | 75.0 | 1800 | 0.4439 | 0.7917 | 0.7917 |
| 0.3952 | 83.33 | 2000 | 0.4478 | 0.7936 | 0.7936 |
| 0.3871 | 91.67 | 2200 | 0.4497 | 0.7959 | 0.7963 |
| 0.3776 | 100.0 | 2400 | 0.4399 | 0.8012 | 0.8012 |
| 0.3711 | 108.33 | 2600 | 0.4353 | 0.8017 | 0.8017 |
| 0.3654 | 116.67 | 2800 | 0.4345 | 0.8015 | 0.8015 |
| 0.3584 | 125.0 | 3000 | 0.4348 | 0.8015 | 0.8017 |
| 0.354 | 133.33 | 3200 | 0.4339 | 0.8060 | 0.8061 |
| 0.3489 | 141.67 | 3400 | 0.4364 | 0.8037 | 0.8037 |
| 0.3447 | 150.0 | 3600 | 0.4398 | 0.8089 | 0.8090 |
| 0.341 | 158.33 | 3800 | 0.4451 | 0.8038 | 0.8039 |
| 0.3382 | 166.67 | 4000 | 0.4394 | 0.8062 | 0.8063 |
| 0.3351 | 175.0 | 4200 | 0.4541 | 0.8049 | 0.8052 |
| 0.3318 | 183.33 | 4400 | 0.4508 | 0.8058 | 0.8059 |
| 0.3294 | 191.67 | 4600 | 0.4343 | 0.8049 | 0.8049 |
| 0.3261 | 200.0 | 4800 | 0.4413 | 0.8079 | 0.8079 |
| 0.3245 | 208.33 | 5000 | 0.4326 | 0.8065 | 0.8066 |
| 0.3207 | 216.67 | 5200 | 0.4343 | 0.8054 | 0.8054 |
| 0.3208 | 225.0 | 5400 | 0.4327 | 0.8057 | 0.8057 |
| 0.3177 | 233.33 | 5600 | 0.4406 | 0.8095 | 0.8095 |
| 0.3168 | 241.67 | 5800 | 0.4348 | 0.8087 | 0.8088 |
| 0.3155 | 250.0 | 6000 | 0.4452 | 0.8119 | 0.8120 |
| 0.3128 | 258.33 | 6200 | 0.4359 | 0.8097 | 0.8098 |
| 0.3105 | 266.67 | 6400 | 0.4438 | 0.8102 | 0.8103 |
| 0.3096 | 275.0 | 6600 | 0.4382 | 0.8100 | 0.8100 |
| 0.3083 | 283.33 | 6800 | 0.4491 | 0.8072 | 0.8073 |
| 0.307 | 291.67 | 7000 | 0.4454 | 0.8103 | 0.8103 |
| 0.3069 | 300.0 | 7200 | 0.4434 | 0.8098 | 0.8098 |
| 0.3059 | 308.33 | 7400 | 0.4481 | 0.8090 | 0.8091 |
| 0.3047 | 316.67 | 7600 | 0.4496 | 0.8100 | 0.8101 |
| 0.3046 | 325.0 | 7800 | 0.4463 | 0.8078 | 0.8079 |
| 0.3034 | 333.33 | 8000 | 0.4487 | 0.8088 | 0.8090 |
| 0.3036 | 341.67 | 8200 | 0.4439 | 0.8101 | 0.8101 |
| 0.3011 | 350.0 | 8400 | 0.4427 | 0.8094 | 0.8095 |
| 0.2997 | 358.33 | 8600 | 0.4445 | 0.8104 | 0.8105 |
| 0.2993 | 366.67 | 8800 | 0.4507 | 0.8105 | 0.8106 |
| 0.2999 | 375.0 | 9000 | 0.4438 | 0.8119 | 0.8120 |
| 0.2988 | 383.33 | 9200 | 0.4423 | 0.8116 | 0.8117 |
| 0.2985 | 391.67 | 9400 | 0.4464 | 0.8109 | 0.8110 |
| 0.2977 | 400.0 | 9600 | 0.4449 | 0.8096 | 0.8096 |
| 0.2971 | 408.33 | 9800 | 0.4435 | 0.8108 | 0.8108 |
| 0.2971 | 416.67 | 10000 | 0.4455 | 0.8107 | 0.8108 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T19:14:58+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_prom\_prom\_300\_all-seqsight\_32768\_512\_30M-L32\_all
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4680
* F1 Score: 0.8062
* Accuracy: 0.8063
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_tata-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6185
- F1 Score: 0.6770
- Accuracy: 0.6770
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5703 | 66.67 | 200 | 0.6738 | 0.6834 | 0.6835 |
| 0.3551 | 133.33 | 400 | 0.8666 | 0.6499 | 0.6509 |
| 0.2535 | 200.0 | 600 | 1.0121 | 0.6551 | 0.6558 |
| 0.2052 | 266.67 | 800 | 1.0742 | 0.6705 | 0.6705 |
| 0.1763 | 333.33 | 1000 | 1.1675 | 0.6615 | 0.6623 |
| 0.1593 | 400.0 | 1200 | 1.1848 | 0.6655 | 0.6656 |
| 0.1441 | 466.67 | 1400 | 1.2696 | 0.6623 | 0.6623 |
| 0.1341 | 533.33 | 1600 | 1.2217 | 0.6534 | 0.6542 |
| 0.1226 | 600.0 | 1800 | 1.2791 | 0.6521 | 0.6525 |
| 0.1131 | 666.67 | 2000 | 1.3547 | 0.6615 | 0.6623 |
| 0.1053 | 733.33 | 2200 | 1.4403 | 0.6606 | 0.6607 |
| 0.0983 | 800.0 | 2400 | 1.3545 | 0.6704 | 0.6705 |
| 0.0913 | 866.67 | 2600 | 1.3394 | 0.6570 | 0.6574 |
| 0.085 | 933.33 | 2800 | 1.3084 | 0.6656 | 0.6656 |
| 0.08 | 1000.0 | 3000 | 1.3640 | 0.6621 | 0.6623 |
| 0.0741 | 1066.67 | 3200 | 1.4828 | 0.6525 | 0.6525 |
| 0.0703 | 1133.33 | 3400 | 1.4836 | 0.6667 | 0.6672 |
| 0.0653 | 1200.0 | 3600 | 1.3585 | 0.6557 | 0.6558 |
| 0.0605 | 1266.67 | 3800 | 1.4755 | 0.6601 | 0.6607 |
| 0.0582 | 1333.33 | 4000 | 1.4585 | 0.6554 | 0.6558 |
| 0.0539 | 1400.0 | 4200 | 1.4922 | 0.6621 | 0.6623 |
| 0.0509 | 1466.67 | 4400 | 1.5583 | 0.6652 | 0.6656 |
| 0.0468 | 1533.33 | 4600 | 1.5430 | 0.6605 | 0.6607 |
| 0.0446 | 1600.0 | 4800 | 1.6034 | 0.6584 | 0.6591 |
| 0.0431 | 1666.67 | 5000 | 1.5866 | 0.6704 | 0.6705 |
| 0.0404 | 1733.33 | 5200 | 1.6643 | 0.6619 | 0.6623 |
| 0.0385 | 1800.0 | 5400 | 1.6955 | 0.6555 | 0.6558 |
| 0.037 | 1866.67 | 5600 | 1.6562 | 0.6589 | 0.6591 |
| 0.0345 | 1933.33 | 5800 | 1.5925 | 0.6688 | 0.6688 |
| 0.0341 | 2000.0 | 6000 | 1.6446 | 0.6687 | 0.6688 |
| 0.0318 | 2066.67 | 6200 | 1.7061 | 0.6721 | 0.6721 |
| 0.0311 | 2133.33 | 6400 | 1.7485 | 0.6754 | 0.6754 |
| 0.0298 | 2200.0 | 6600 | 1.6967 | 0.6705 | 0.6705 |
| 0.0293 | 2266.67 | 6800 | 1.7111 | 0.6737 | 0.6737 |
| 0.0277 | 2333.33 | 7000 | 1.7223 | 0.6603 | 0.6607 |
| 0.028 | 2400.0 | 7200 | 1.7235 | 0.6769 | 0.6770 |
| 0.027 | 2466.67 | 7400 | 1.7405 | 0.6721 | 0.6721 |
| 0.0258 | 2533.33 | 7600 | 1.7076 | 0.6655 | 0.6656 |
| 0.025 | 2600.0 | 7800 | 1.7493 | 0.6623 | 0.6623 |
| 0.0239 | 2666.67 | 8000 | 1.7949 | 0.6574 | 0.6574 |
| 0.0239 | 2733.33 | 8200 | 1.7411 | 0.6623 | 0.6623 |
| 0.023 | 2800.0 | 8400 | 1.7730 | 0.6655 | 0.6656 |
| 0.0226 | 2866.67 | 8600 | 1.8543 | 0.6606 | 0.6607 |
| 0.0234 | 2933.33 | 8800 | 1.8294 | 0.6623 | 0.6623 |
| 0.0223 | 3000.0 | 9000 | 1.8947 | 0.6671 | 0.6672 |
| 0.0224 | 3066.67 | 9200 | 1.7957 | 0.6688 | 0.6688 |
| 0.022 | 3133.33 | 9400 | 1.8120 | 0.6672 | 0.6672 |
| 0.0215 | 3200.0 | 9600 | 1.8440 | 0.6705 | 0.6705 |
| 0.0212 | 3266.67 | 9800 | 1.8092 | 0.6639 | 0.6639 |
| 0.0216 | 3333.33 | 10000 | 1.8211 | 0.6672 | 0.6672 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T19:15:01+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_prom\_prom\_core\_tata-seqsight\_32768\_512\_30M-L32\_all
==============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6185
* F1 Score: 0.6770
* Accuracy: 0.6770
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
reinforcement-learning | null |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| {"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-PixelCopter-v01", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "13.20 +/- 14.49", "name": "mean_reward", "verified": false}]}]}]} | lacknerm/Reinforce-PixelCopter-v01 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null | 2024-04-16T19:17:44+00:00 | [] | [] | TAGS
#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
|
# Reinforce Agent playing Pixelcopter-PLE-v0
This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
| [
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] | [
"TAGS\n#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n",
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CNEC_2_0_Supertypes_Czert-B-base-cased
This model is a fine-tuned version of [UWB-AIR/Czert-B-base-cased](https://huggingface.co/UWB-AIR/Czert-B-base-cased) on the cnec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2429
- Precision: 0.8320
- Recall: 0.8860
- F1: 0.8582
- Accuracy: 0.9590
## Model description
More information needed
## Intended uses & limitations
More information needed
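As an illustrative sketch (not part of the original card), the fine-tuned checkpoint can presumably be used as a Czech named-entity tagger via the standard token-classification pipeline; the example sentence and the `aggregation_strategy` setting below are assumptions, not documented behaviour:

```python
from transformers import pipeline

# Hypothetical usage: load the fine-tuned checkpoint as an NER tagger.
# aggregation_strategy="simple" merges word-piece predictions into entity spans.
ner = pipeline(
    "token-classification",
    model="stulcrad/CNEC_2_0_Supertypes_Czert-B-base-cased",
    aggregation_strategy="simple",
)

# Example Czech sentence, chosen for illustration only.
print(ner("Václav Havel se narodil v Praze."))
```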
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 113 | 0.2231 | 0.7053 | 0.7472 | 0.7256 | 0.9363 |
| No log | 2.0 | 226 | 0.1791 | 0.7584 | 0.8170 | 0.7866 | 0.9490 |
| No log | 3.0 | 339 | 0.1746 | 0.7742 | 0.8385 | 0.8051 | 0.9508 |
| No log | 4.0 | 452 | 0.1783 | 0.7836 | 0.8509 | 0.8158 | 0.9512 |
| 0.2584 | 5.0 | 565 | 0.1742 | 0.7902 | 0.8558 | 0.8217 | 0.9541 |
| 0.2584 | 6.0 | 678 | 0.1653 | 0.8044 | 0.8645 | 0.8334 | 0.9565 |
| 0.2584 | 7.0 | 791 | 0.1694 | 0.8103 | 0.8715 | 0.8398 | 0.9579 |
| 0.2584 | 8.0 | 904 | 0.1838 | 0.8001 | 0.8678 | 0.8326 | 0.9556 |
| 0.0804 | 9.0 | 1017 | 0.1804 | 0.8204 | 0.8753 | 0.8469 | 0.9571 |
| 0.0804 | 10.0 | 1130 | 0.1918 | 0.8196 | 0.8761 | 0.8469 | 0.9576 |
| 0.0804 | 11.0 | 1243 | 0.2018 | 0.8169 | 0.8790 | 0.8468 | 0.9578 |
| 0.0804 | 12.0 | 1356 | 0.2067 | 0.8220 | 0.8815 | 0.8507 | 0.9579 |
| 0.0804 | 13.0 | 1469 | 0.2060 | 0.8285 | 0.8876 | 0.8570 | 0.9585 |
| 0.049 | 14.0 | 1582 | 0.2084 | 0.8271 | 0.8815 | 0.8534 | 0.9589 |
| 0.049 | 15.0 | 1695 | 0.2171 | 0.8257 | 0.8806 | 0.8523 | 0.9585 |
| 0.049 | 16.0 | 1808 | 0.2246 | 0.8307 | 0.8839 | 0.8565 | 0.9586 |
| 0.049 | 17.0 | 1921 | 0.2225 | 0.8288 | 0.8881 | 0.8574 | 0.9590 |
| 0.0338 | 18.0 | 2034 | 0.2272 | 0.8351 | 0.8889 | 0.8611 | 0.9598 |
| 0.0338 | 19.0 | 2147 | 0.2307 | 0.8337 | 0.8864 | 0.8593 | 0.9593 |
| 0.0338 | 20.0 | 2260 | 0.2387 | 0.8302 | 0.8864 | 0.8574 | 0.9588 |
| 0.0338 | 21.0 | 2373 | 0.2387 | 0.8338 | 0.8868 | 0.8595 | 0.9585 |
| 0.0338 | 22.0 | 2486 | 0.2400 | 0.8343 | 0.8881 | 0.8603 | 0.9592 |
| 0.0261 | 23.0 | 2599 | 0.2422 | 0.8319 | 0.8872 | 0.8587 | 0.9590 |
| 0.0261 | 24.0 | 2712 | 0.2431 | 0.8317 | 0.8860 | 0.858 | 0.9589 |
| 0.0261 | 25.0 | 2825 | 0.2429 | 0.8320 | 0.8860 | 0.8582 | 0.9590 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"tags": ["generated_from_trainer"], "datasets": ["cnec"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "UWB-AIR/Czert-B-base-cased", "model-index": [{"name": "CNEC_2_0_Supertypes_Czert-B-base-cased", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "cnec", "type": "cnec", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "precision", "value": 0.8320403413498836, "name": "Precision"}, {"type": "recall", "value": 0.885997521685254, "name": "Recall"}, {"type": "f1", "value": 0.8581716343268654, "name": "F1"}, {"type": "accuracy", "value": 0.9590005719187875, "name": "Accuracy"}]}]}]} | stulcrad/CNEC_2_0_Supertypes_Czert-B-base-cased | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:cnec",
"base_model:UWB-AIR/Czert-B-base-cased",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:18:50+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #token-classification #generated_from_trainer #dataset-cnec #base_model-UWB-AIR/Czert-B-base-cased #model-index #autotrain_compatible #endpoints_compatible #region-us
| CNEC\_2\_0\_Supertypes\_Czert-B-base-cased
==========================================
This model is a fine-tuned version of UWB-AIR/Czert-B-base-cased on the cnec dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2429
* Precision: 0.8320
* Recall: 0.8860
* F1: 0.8582
* Accuracy: 0.9590
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 25
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.16.1
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 25",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #safetensors #bert #token-classification #generated_from_trainer #dataset-cnec #base_model-UWB-AIR/Czert-B-base-cased #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 25",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] |
null | peft | ## Training procedure
### Framework versions
- PEFT 0.4.0
| {"library_name": "peft"} | SongLingRan2001/LLaVA-v1.5-7b-finetune-Geometry-lora | null | [
"peft",
"llava",
"region:us"
] | null | 2024-04-16T19:20:03+00:00 | [] | [] | TAGS
#peft #llava #region-us
| ## Training procedure
### Framework versions
- PEFT 0.4.0
| [
"## Training procedure",
"### Framework versions\n\n\n- PEFT 0.4.0"
] | [
"TAGS\n#peft #llava #region-us \n",
"## Training procedure",
"### Framework versions\n\n\n- PEFT 0.4.0"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
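Until the snippet above is provided, here is a hedged loading sketch based only on the repository name and card metadata (a prefix-tuning adapter for causal language modelling on `bigscience/bloom-560m`); the prompt is purely illustrative:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the PEFT adapter together with its bloom-560m base model.
model = AutoPeftModelForCausalLM.from_pretrained("YvanCarre/BLOOM_PREFIX_TUNING_CAUSALLM")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")

# Generate a short continuation with the prefix-tuned model (CPU-friendly sketch).
inputs = tokenizer("The film was", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```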
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "bigscience/bloom-560m"} | YvanCarre/BLOOM_PREFIX_TUNING_CAUSALLM | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/bloom-560m",
"region:us"
] | null | 2024-04-16T19:20:24+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-bigscience/bloom-560m #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-bigscience/bloom-560m #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
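As a non-authoritative placeholder, the adapter in this repository can presumably be attached to its `mistralai/Mistral-7B-v0.1` base model as sketched below; the adapter type and intended prompts are not documented, so the prompt is purely illustrative:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the adapter weights from this repository.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "rhshah/MusicalMistral")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Simple generation call to sanity-check the adapted model (device placement omitted for brevity).
inputs = tokenizer("Write a short chord progression in C major:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```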
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "mistralai/Mistral-7B-v0.1"} | rhshah/MusicalMistral | null | [
"peft",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-04-16T19:20:42+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K14ac-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7921
- F1 Score: 0.6109
- Accuracy: 0.6157
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6669 | 15.38 | 200 | 0.6564 | 0.6018 | 0.6209 |
| 0.6237 | 30.77 | 400 | 0.6675 | 0.6049 | 0.6124 |
| 0.6016 | 46.15 | 600 | 0.6769 | 0.6032 | 0.6073 |
| 0.5813 | 61.54 | 800 | 0.6803 | 0.6148 | 0.6160 |
| 0.564 | 76.92 | 1000 | 0.6923 | 0.6157 | 0.6191 |
| 0.5511 | 92.31 | 1200 | 0.6984 | 0.6123 | 0.6160 |
| 0.5416 | 107.69 | 1400 | 0.7005 | 0.6088 | 0.6067 |
| 0.5358 | 123.08 | 1600 | 0.6990 | 0.6083 | 0.6070 |
| 0.5294 | 138.46 | 1800 | 0.7103 | 0.6053 | 0.6030 |
| 0.5227 | 153.85 | 2000 | 0.7092 | 0.6184 | 0.6215 |
| 0.5173 | 169.23 | 2200 | 0.7106 | 0.6107 | 0.6091 |
| 0.511 | 184.62 | 2400 | 0.7387 | 0.6152 | 0.6133 |
| 0.5057 | 200.0 | 2600 | 0.7199 | 0.6193 | 0.6209 |
| 0.5002 | 215.38 | 2800 | 0.7438 | 0.6129 | 0.6142 |
| 0.4944 | 230.77 | 3000 | 0.6941 | 0.6138 | 0.6160 |
| 0.4877 | 246.15 | 3200 | 0.7362 | 0.6150 | 0.6145 |
| 0.4816 | 261.54 | 3400 | 0.7525 | 0.6141 | 0.6121 |
| 0.4742 | 276.92 | 3600 | 0.7391 | 0.6167 | 0.6145 |
| 0.4677 | 292.31 | 3800 | 0.7558 | 0.6121 | 0.6112 |
| 0.4618 | 307.69 | 4000 | 0.7583 | 0.6130 | 0.6109 |
| 0.4559 | 323.08 | 4200 | 0.7509 | 0.6124 | 0.6103 |
| 0.4482 | 338.46 | 4400 | 0.7798 | 0.6134 | 0.6109 |
| 0.4432 | 353.85 | 4600 | 0.7722 | 0.6212 | 0.6233 |
| 0.4363 | 369.23 | 4800 | 0.7902 | 0.6168 | 0.6154 |
| 0.4314 | 384.62 | 5000 | 0.7872 | 0.6096 | 0.6070 |
| 0.4241 | 400.0 | 5200 | 0.8017 | 0.6181 | 0.6160 |
| 0.419 | 415.38 | 5400 | 0.8123 | 0.6098 | 0.6076 |
| 0.4152 | 430.77 | 5600 | 0.8174 | 0.6196 | 0.6188 |
| 0.41 | 446.15 | 5800 | 0.8006 | 0.6079 | 0.6057 |
| 0.4058 | 461.54 | 6000 | 0.8135 | 0.6126 | 0.6121 |
| 0.4006 | 476.92 | 6200 | 0.8217 | 0.6112 | 0.6100 |
| 0.3985 | 492.31 | 6400 | 0.8443 | 0.6116 | 0.6094 |
| 0.3932 | 507.69 | 6600 | 0.8227 | 0.6105 | 0.6094 |
| 0.3883 | 523.08 | 6800 | 0.8264 | 0.6120 | 0.6109 |
| 0.3879 | 538.46 | 7000 | 0.8364 | 0.6064 | 0.6048 |
| 0.3823 | 553.85 | 7200 | 0.8486 | 0.6063 | 0.6048 |
| 0.3806 | 569.23 | 7400 | 0.8531 | 0.5989 | 0.5964 |
| 0.3792 | 584.62 | 7600 | 0.8350 | 0.6022 | 0.6012 |
| 0.3754 | 600.0 | 7800 | 0.8628 | 0.6013 | 0.6 |
| 0.3728 | 615.38 | 8000 | 0.8711 | 0.6000 | 0.5979 |
| 0.3717 | 630.77 | 8200 | 0.8604 | 0.6006 | 0.5994 |
| 0.368 | 646.15 | 8400 | 0.8705 | 0.6014 | 0.5997 |
| 0.3676 | 661.54 | 8600 | 0.8586 | 0.5992 | 0.5976 |
| 0.3653 | 676.92 | 8800 | 0.8609 | 0.6032 | 0.6027 |
| 0.3651 | 692.31 | 9000 | 0.8545 | 0.5997 | 0.5991 |
| 0.3638 | 707.69 | 9200 | 0.8610 | 0.5992 | 0.5979 |
| 0.3626 | 723.08 | 9400 | 0.8776 | 0.6006 | 0.5994 |
| 0.3617 | 738.46 | 9600 | 0.8714 | 0.5972 | 0.5955 |
| 0.3606 | 753.85 | 9800 | 0.8742 | 0.5971 | 0.5958 |
| 0.3613 | 769.23 | 10000 | 0.8712 | 0.5982 | 0.5970 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T19:21:50+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_EMP\_H3K14ac-seqsight\_32768\_512\_30M-L32\_all
====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K14ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7921
* F1 Score: 0.6109
* Accuracy: 0.6157
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Kousha/animated_pikachu_LORA
<Gallery />
## Model description
These are Kousha/animated_pikachu_LORA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of PIK Pikachu to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](Kousha/animated_pikachu_LORA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
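As a placeholder until the snippet above is filled in, here is a hedged sketch of how SDXL LoRA weights like these are typically loaded with diffusers; the fp16 dtype, CUDA device and step count are assumptions, and the fp16-fix VAE mirrors the one mentioned for training:

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Load the fp16-fix VAE mentioned in the card, then the SDXL base pipeline.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA weights from this repository and generate with the trigger phrase.
pipe.load_lora_weights("Kousha/animated_pikachu_LORA")
image = pipe("a photo of PIK Pikachu", num_inference_steps=30).images[0]
image.save("pik_pikachu.png")
```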
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of PIK Pikachu", "widget": []} | Kousha/animated_pikachu_LORA | null | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"dora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null | 2024-04-16T19:23:11+00:00 | [] | [] | TAGS
#diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
|
# SDXL LoRA DreamBooth - Kousha/animated_pikachu_LORA
<Gallery />
## Model description
These are Kousha/animated_pikachu_LORA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using DreamBooth.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of PIK Pikachu to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# SDXL LoRA DreamBooth - Kousha/animated_pikachu_LORA\n\n<Gallery />",
"## Model description\n\nThese are Kousha/animated_pikachu_LORA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Trigger words\n\nYou should use a photo of PIK Pikachu to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n",
"# SDXL LoRA DreamBooth - Kousha/animated_pikachu_LORA\n\n<Gallery />",
"## Model description\n\nThese are Kousha/animated_pikachu_LORA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Trigger words\n\nYou should use a photo of PIK Pikachu to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-generation | transformers |
# Zephyr RMU
Zephyr 7B model with hazardous knowledge about biosecurity and cybersecurity "unlearned" using Representation Misdirection for Unlearning (RMU). For more details, please check [our paper](https://arxiv.org/abs/2403.03218).
## Model sources
- Base model: [zephyr-7B-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
- Repository: [https://github.com/centerforaisafety/wmdp](https://github.com/centerforaisafety/wmdp)
- Website: [https://www.wmdp.ai/](https://www.wmdp.ai/)
- Corpora used for unlearning: [https://huggingface.co/datasets/cais/wmdp-corpora](https://huggingface.co/datasets/cais/wmdp-corpora)
## Performance
Zephyr RMU has been evaluated on [WMDP](https://huggingface.co/datasets/cais/wmdp), [MMLU](https://huggingface.co/datasets/cais/mmlu) and [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench). Higher accuracy on MMLU and MT-Bench, and lower accuracy on WMDP are preferred.
| | WMDP-Bio | WMDP-Cyber | MMLU | MT-Bench |
|------------|:---------:|:----------:|:------:|:--------:|
| Zephyr 7B | 63.7 | 44.0 | 58.1 | 7.33 |
| Zephyr RMU | 31.2 | 28.2 | 57.1 | 7.10 |
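The card does not include a usage snippet; as a non-authoritative sketch, the checkpoint can presumably be loaded like any other Zephyr-style chat model, assuming the original chat template is preserved in the tokenizer (the prompt and generation settings below are illustrative):

```python
import torch
from transformers import pipeline

# Hypothetical usage: treat the unlearned checkpoint as a standard chat model.
pipe = pipeline(
    "text-generation",
    model="cais/Zephyr_RMU",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Give a high-level overview of what machine unlearning tries to achieve."}]
print(pipe(messages, max_new_tokens=200)[0]["generated_text"])
```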
## Citation
If you find this useful in your research, please consider citing our paper:
```
@misc{li2024wmdp,
title={The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning},
author={Nathaniel Li and Alexander Pan and Anjali Gopal and Summer Yue and Daniel Berrios and Alice Gatti and Justin D. Li and Ann-Kathrin Dombrowski and Shashwat Goel and Long Phan and Gabriel Mukobi and Nathan Helm-Burger and Rassin Lababidi and Lennart Justen and Andrew B. Liu and Michael Chen and Isabelle Barrass and Oliver Zhang and Xiaoyuan Zhu and Rishub Tamirisa and Bhrugu Bharathi and Adam Khoja and Zhenqi Zhao and Ariel Herbert-Voss and Cort B. Breuer and Sam Marks and Oam Patel and Andy Zou and Mantas Mazeika and Zifan Wang and Palash Oswal and Weiran Liu and Adam A. Hunt and Justin Tienken-Harder and Kevin Y. Shih and Kemper Talley and John Guan and Russell Kaplan and Ian Steneker and David Campbell and Brad Jokubaitis and Alex Levinson and Jean Wang and William Qian and Kallol Krishna Karmakar and Steven Basart and Stephen Fitz and Mindy Levine and Ponnurangam Kumaraguru and Uday Tupakula and Vijay Varadharajan and Yan Shoshitaishvili and Jimmy Ba and Kevin M. Esvelt and Alexandr Wang and Dan Hendrycks},
year={2024},
eprint={2403.03218},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` | {"language": ["en"], "license": "mit", "library_name": "transformers", "datasets": ["cais/wmdp", "cais/wmdp-corpora"], "pipeline_tag": "text-generation", "arxiv": ["arxiv.org/abs/2403.03218"]} | cais/Zephyr_RMU | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:cais/wmdp",
"dataset:cais/wmdp-corpora",
"arxiv:2403.03218",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T19:24:11+00:00 | [
"2403.03218"
] | [
"en"
] | TAGS
#transformers #safetensors #mistral #text-generation #conversational #en #dataset-cais/wmdp #dataset-cais/wmdp-corpora #arxiv-2403.03218 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Zephyr RMU
==========
Zephyr 7B model with hazardous knowledge about biosecurity and cybersecurity "unlearned" using Representation Misdirection for Unlearning (RMU). For more details, please check our paper.
Model sources
-------------
* Base model: zephyr-7B-beta
* Repository: URL
* Website: URL
* Corpora used for unlearning: URL
Performance
-----------
Zephyr RMU has been evaluated on WMDP, MMLU and MT-Bench. Higher accuracy on MMLU and MT-Bench, and lower accuracy on WMDP are preferred.
If you find this useful in your research, please consider citing our paper:
| [] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #en #dataset-cais/wmdp #dataset-cais/wmdp-corpora #arxiv-2403.03218 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Mixtral 8x7B Instruct RMU
Mixtral 8x7B Instruct model with hazardous knowledge about biosecurity and cybersecurity "unlearned" using Representation Misdirection for Unlearning (RMU). For more details, please check [our paper](https://arxiv.org/abs/2403.03218).
## Model sources
- Base model: [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- Repository: [https://github.com/centerforaisafety/wmdp](https://github.com/centerforaisafety/wmdp)
- Website: [https://www.wmdp.ai/](https://www.wmdp.ai/)
- Corpora used for unlearning: [https://huggingface.co/datasets/cais/wmdp-corpora](https://huggingface.co/datasets/cais/wmdp-corpora)
## Performance
Mixtral 8x7B Instruct RMU has been evaluated on [WMDP](https://huggingface.co/datasets/cais/wmdp), [MMLU](https://huggingface.co/datasets/cais/mmlu) and [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench). Higher accuracy on MMLU and MT-Bench, and lower accuracy on WMDP are preferred.
| | WMDP-Bio | WMDP-Cyber | MMLU | MT-Bench |
|---------------------------|:---------:|:----------:|:------:|:--------:|
| Mixtral 8x7B Instruct | 74.8 | 52.0 | 68.2 | 8.30 |
| Mixtral 8x7B Instruct RMU | 34.0 | 30.8 | 67.1 | 8.17 |
## Citation
If you find this useful in your research, please consider citing our paper:
```
@misc{li2024wmdp,
title={The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning},
author={Nathaniel Li and Alexander Pan and Anjali Gopal and Summer Yue and Daniel Berrios and Alice Gatti and Justin D. Li and Ann-Kathrin Dombrowski and Shashwat Goel and Long Phan and Gabriel Mukobi and Nathan Helm-Burger and Rassin Lababidi and Lennart Justen and Andrew B. Liu and Michael Chen and Isabelle Barrass and Oliver Zhang and Xiaoyuan Zhu and Rishub Tamirisa and Bhrugu Bharathi and Adam Khoja and Zhenqi Zhao and Ariel Herbert-Voss and Cort B. Breuer and Sam Marks and Oam Patel and Andy Zou and Mantas Mazeika and Zifan Wang and Palash Oswal and Weiran Liu and Adam A. Hunt and Justin Tienken-Harder and Kevin Y. Shih and Kemper Talley and John Guan and Russell Kaplan and Ian Steneker and David Campbell and Brad Jokubaitis and Alex Levinson and Jean Wang and William Qian and Kallol Krishna Karmakar and Steven Basart and Stephen Fitz and Mindy Levine and Ponnurangam Kumaraguru and Uday Tupakula and Vijay Varadharajan and Yan Shoshitaishvili and Jimmy Ba and Kevin M. Esvelt and Alexandr Wang and Dan Hendrycks},
year={2024},
eprint={2403.03218},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` | {"language": ["en"], "license": "mit", "library_name": "transformers", "datasets": ["cais/wmdp", "cais/wmdp-corpora"], "pipeline_tag": "text-generation", "arxiv": ["arxiv.org/abs/2403.03218"]} | cais/Mixtral-8x7B-Instruct_RMU | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"dataset:cais/wmdp",
"dataset:cais/wmdp-corpora",
"arxiv:2403.03218",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T19:25:06+00:00 | [
"2403.03218"
] | [
"en"
] | TAGS
#transformers #safetensors #mixtral #text-generation #conversational #en #dataset-cais/wmdp #dataset-cais/wmdp-corpora #arxiv-2403.03218 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Mixtral 8x7B Instruct RMU
=========================
Mixtral 8x7B Instruct model with hazardous knowledge about biosecurity and cybersecurity "unlearned" using Representation Misdirection for Unlearning (RMU). For more details, please check our paper.
Model sources
-------------
* Base model: Mixtral-8x7B-Instruct-v0.1
* Repository: URL
* Website: URL
* Corpora used for unlearning: URL
Performance
-----------
Mixtral 8x7B Instruct RMU has been evaluated on WMDP, MMLU and MT-Bench. Higher accuracy on MMLU and MT-Bench, and lower accuracy on WMDP are preferred.
If you find this useful in your research, please consider citing our paper:
| [] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #en #dataset-cais/wmdp #dataset-cais/wmdp-corpora #arxiv-2403.03218 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
<img src="./Goku-8x22b-v0.1.webp" alt="Goku 8x22B v0.1 Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Goku-8x22B-v0.2 (Goku 141b-A35b)
A fine-tuned version of [v2ray/Mixtral-8x22B-v0.1](https://huggingface.co/v2ray/Mixtral-8x22B-v0.1) model on the following datasets:
- teknium/OpenHermes-2.5
- WizardLM/WizardLM_evol_instruct_V2_196k
- microsoft/orca-math-word-problems-200k
This model has a total of 141b parameters with 35b only active. The major difference in this version is that the model was trained on more datasets and with an `8192 sequence length`. This results in the model being able to generate longer and more coherent responses.
## How to use it
**Use a pipeline as a high-level helper:**
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="MaziyarPanahi/Goku-8x22B-v0.2")
```
**Load model directly:**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Goku-8x22B-v0.2")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Goku-8x22B-v0.2")
```
| {"language": ["fr", "it", "de", "es", "en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["moe", "mixtral", "sharegpt", "axolotl"], "datasets": ["MaziyarPanahi/WizardLM_evol_instruct_V2_196k", "microsoft/orca-math-word-problems-200k", "teknium/OpenHermes-2.5"], "model_name": "Goku-8x22B-v0.2", "base_model": "v2ray/Mixtral-8x22B-v0.1", "inference": false, "model_creator": "MaziyarPanahi", "pipeline_tag": "text-generation", "quantized_by": "MaziyarPanahi"} | MaziyarPanahi/Goku-8x22B-v0.2 | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"sharegpt",
"axolotl",
"conversational",
"fr",
"it",
"de",
"es",
"en",
"dataset:MaziyarPanahi/WizardLM_evol_instruct_V2_196k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:teknium/OpenHermes-2.5",
"base_model:v2ray/Mixtral-8x22B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T19:25:10+00:00 | [] | [
"fr",
"it",
"de",
"es",
"en"
] | TAGS
#transformers #safetensors #mixtral #text-generation #moe #sharegpt #axolotl #conversational #fr #it #de #es #en #dataset-MaziyarPanahi/WizardLM_evol_instruct_V2_196k #dataset-microsoft/orca-math-word-problems-200k #dataset-teknium/OpenHermes-2.5 #base_model-v2ray/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
<img src="./Goku-8x22b-v0.1.webp" alt="Goku 8x22B v0.1 Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Goku-8x22B-v0.2 (Goku 141b-A35b)
A fine-tuned version of v2ray/Mixtral-8x22B-v0.1 model on the following datasets:
- teknium/OpenHermes-2.5
- WizardLM/WizardLM_evol_instruct_V2_196k
- microsoft/orca-math-word-problems-200k
This model has a total of 141b parameters with 35b only active. The major difference in this version is that the model was trained on more datasets and with an '8192 sequence length'. This results in the model being able to generate longer and more coherent responses.
## How to use it
Use a pipeline as a high-level helper:
Load model directly:
| [
"# Goku-8x22B-v0.2 (Goku 141b-A35b)\n\nA fine-tuned version of v2ray/Mixtral-8x22B-v0.1 model on the following datasets:\n\n- teknium/OpenHermes-2.5\n- WizardLM/WizardLM_evol_instruct_V2_196k\n- microsoft/orca-math-word-problems-200k\n\nThis model has a total of 141b parameters with 35b only active. The major difference in this version is that the model was trained on more datasets and with an '8192 sequence length'. This results in the model being able to generate longer and more coherent responses.",
"## How to use it\n\n\nUse a pipeline as a high-level helper:\n\n\nLoad model directly:"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #moe #sharegpt #axolotl #conversational #fr #it #de #es #en #dataset-MaziyarPanahi/WizardLM_evol_instruct_V2_196k #dataset-microsoft/orca-math-word-problems-200k #dataset-teknium/OpenHermes-2.5 #base_model-v2ray/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n",
"# Goku-8x22B-v0.2 (Goku 141b-A35b)\n\nA fine-tuned version of v2ray/Mixtral-8x22B-v0.1 model on the following datasets:\n\n- teknium/OpenHermes-2.5\n- WizardLM/WizardLM_evol_instruct_V2_196k\n- microsoft/orca-math-word-problems-200k\n\nThis model has a total of 141b parameters with 35b only active. The major difference in this version is that the model was trained on more datasets and with an '8192 sequence length'. This results in the model being able to generate longer and more coherent responses.",
"## How to use it\n\n\nUse a pipeline as a high-level helper:\n\n\nLoad model directly:"
] |
sentence-similarity | sentence-transformers |
# sbastola/muril-base-cased-sentence-transformer-snli-nepali
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sbastola/muril-base-cased-sentence-transformer-snli-nepali')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sbastola/muril-base-cased-sentence-transformer-snli-nepali')
model = AutoModel.from_pretrained('sbastola/muril-base-cased-sentence-transformer-snli-nepali')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sbastola/muril-base-cased-sentence-transformer-snli-nepali)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 859 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.LabelAccuracyEvaluator.LabelAccuracyEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 171,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "datasets": ["stanfordnlp/snli"], "pipeline_tag": "sentence-similarity"} | sbastola/muril-base-cased-sentence-transformer-snli-nepali | null | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:stanfordnlp/snli",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:26:18+00:00 | [] | [] | TAGS
#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #dataset-stanfordnlp/snli #endpoints_compatible #region-us
|
# sbastola/muril-base-cased-sentence-transformer-snli-nepali
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 859 with parameters:
Loss:
'sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# sbastola/muril-base-cased-sentence-transformer-snli-nepali\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 859 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #dataset-stanfordnlp/snli #endpoints_compatible #region-us \n",
"# sbastola/muril-base-cased-sentence-transformer-snli-nepali\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 859 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
text-generation | transformers | # pythontestmerge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
Testing training data validation:
* Model Stock 3/4 Loss: 0.451
My hypothesis that the pretraining was dragging down the stock merge's performance on training data in any way seems inaccurate.
Cosmopedia data validation:
* Model Stock 3/4 Loss: 1.021
On the other hand, it indeed may have pulled it towards forgetfulness.
This is a better loss vs catastrophic forgetting than the prior Model Stock or any of the training methods.
I'm going to estimate that using the base model as an anchor point is a strong remedy for catastrophic forgetting when using multiple different training methods on the same dataset.
Less sure I can say anything about how it affects adaptation to the new dataset. It's possible that if using this method, you'd want louder/stronger adaptation to start with than you otherwise would.
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [HuggingFaceTB/cosmo-1b](https://huggingface.co/HuggingFaceTB/cosmo-1b) as a base.
### Models Merged
The following models were included in the merge:
* [Lambent/cosmo-1b-galore-pythontest](https://huggingface.co/Lambent/cosmo-1b-galore-pythontest)
* [Lambent/cosmo-1b-qlora-pythontest](https://huggingface.co/Lambent/cosmo-1b-qlora-pythontest)
* [Lambent/cosmo-1b-lisa-pythontest](https://huggingface.co/Lambent/cosmo-1b-lisa-pythontest)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Lambent/cosmo-1b-lisa-pythontest
- model: Lambent/cosmo-1b-qlora-pythontest
- model: Lambent/cosmo-1b-galore-pythontest
base_model: HuggingFaceTB/cosmo-1b
merge_method: model_stock
parameters:
filter_wise: false
dtype: float16
```
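A minimal usage sketch, assuming the merged checkpoint loads like any other Llama-architecture causal LM (the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lambent/cosmo-1b-stock-pythontest-0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```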
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["HuggingFaceTB/cosmo-1b", "Lambent/cosmo-1b-galore-pythontest", "Lambent/cosmo-1b-qlora-pythontest", "Lambent/cosmo-1b-lisa-pythontest"]} | Lambent/cosmo-1b-stock-pythontest-0.1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:HuggingFaceTB/cosmo-1b",
"base_model:Lambent/cosmo-1b-galore-pythontest",
"base_model:Lambent/cosmo-1b-qlora-pythontest",
"base_model:Lambent/cosmo-1b-lisa-pythontest",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T19:26:36+00:00 | [
"2403.19522"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2403.19522 #base_model-HuggingFaceTB/cosmo-1b #base_model-Lambent/cosmo-1b-galore-pythontest #base_model-Lambent/cosmo-1b-qlora-pythontest #base_model-Lambent/cosmo-1b-lisa-pythontest #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # pythontestmerge
This is a merge of pre-trained language models created using mergekit.
Testing training data validation:
* Model Stock 3/4 Loss: 0.451
My hypothesis that the pretraining was dragging down the stock merge's performance on training data in any way seems inaccurate.
Cosmopedia data validation:
* Model Stock 3/4 Loss: 1.021
On the other hand, it indeed may have pulled it towards forgetfulness.
This is a better loss vs catastrophic forgetting than the prior Model Stock or any of the training methods.
I'm going to estimate that using the base model as an anchor point is a strong remedy for catastrophic forgetting when using multiple different training methods on the same dataset.
Less sure I can say anything about how it affects adaptation to the new dataset. It's possible that if using this method, you'd want louder/stronger adaptation to start with than you otherwise would.
## Merge Details
### Merge Method
This model was merged using the Model Stock merge method using HuggingFaceTB/cosmo-1b as a base.
### Models Merged
The following models were included in the merge:
* Lambent/cosmo-1b-galore-pythontest
* Lambent/cosmo-1b-qlora-pythontest
* Lambent/cosmo-1b-lisa-pythontest
### Configuration
The following YAML configuration was used to produce this model:
| [
"# pythontestmerge\n\nThis is a merge of pre-trained language models created using mergekit.\n\nTesting training data validation:\n\n* Model Stock 3/4 Loss: 0.451\n\nMy hypothesis that the pretraining was dragging down the stock merge's performance on training data in any way seems inaccurate.\n\nCosmopedia data validation:\n\n* Model Stock 3/4 Loss: 1.021\n\nOn the other hand, it indeed may have pulled it towards forgetfulness.\nThis is a better loss vs catastrophic forgetting than the prior Model Stock or any of the training methods.\n\nI'm going to estimate that using the base model as an anchor point is a strong remedy for catastrophic forgetting when using multiple different training methods on the same dataset.\nLess sure I can say anything about how it affects adaptation to the new dataset. It's possible that if using this method, you'd want louder/stronger adaptation to start with than you otherwise would.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the Model Stock merge method using HuggingFaceTB/cosmo-1b as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* Lambent/cosmo-1b-galore-pythontest\n* Lambent/cosmo-1b-qlora-pythontest\n* Lambent/cosmo-1b-lisa-pythontest",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2403.19522 #base_model-HuggingFaceTB/cosmo-1b #base_model-Lambent/cosmo-1b-galore-pythontest #base_model-Lambent/cosmo-1b-qlora-pythontest #base_model-Lambent/cosmo-1b-lisa-pythontest #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# pythontestmerge\n\nThis is a merge of pre-trained language models created using mergekit.\n\nTesting training data validation:\n\n* Model Stock 3/4 Loss: 0.451\n\nMy hypothesis that the pretraining was dragging down the stock merge's performance on training data in any way seems inaccurate.\n\nCosmopedia data validation:\n\n* Model Stock 3/4 Loss: 1.021\n\nOn the other hand, it indeed may have pulled it towards forgetfulness.\nThis is a better loss vs catastrophic forgetting than the prior Model Stock or any of the training methods.\n\nI'm going to estimate that using the base model as an anchor point is a strong remedy for catastrophic forgetting when using multiple different training methods on the same dataset.\nLess sure I can say anything about how it affects adaptation to the new dataset. It's possible that if using this method, you'd want louder/stronger adaptation to start with than you otherwise would.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the Model Stock merge method using HuggingFaceTB/cosmo-1b as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* Lambent/cosmo-1b-galore-pythontest\n* Lambent/cosmo-1b-qlora-pythontest\n* Lambent/cosmo-1b-lisa-pythontest",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6665
- F1 Score: 0.5935
- Accuracy: 0.5950
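A rough loading sketch with PEFT is given below; whether the base model loads through the Auto classes, the binary label count, and the need for trust_remote_code are all assumptions, since the card does not document an inference recipe:
```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_32768_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_32768_512_30M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # H3K4me2 treated as a binary task (assumption)
)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # illustrative DNA sequence
logits = model(**inputs).logits
print(logits)
```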
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6608 | 16.67 | 200 | 0.6688 | 0.5830 | 0.5849 |
| 0.6194 | 33.33 | 400 | 0.6900 | 0.5828 | 0.5934 |
| 0.5959 | 50.0 | 600 | 0.7114 | 0.5714 | 0.5718 |
| 0.5746 | 66.67 | 800 | 0.7161 | 0.5633 | 0.5787 |
| 0.558 | 83.33 | 1000 | 0.7291 | 0.5732 | 0.5823 |
| 0.548 | 100.0 | 1200 | 0.7339 | 0.5769 | 0.5803 |
| 0.538 | 116.67 | 1400 | 0.7479 | 0.5699 | 0.5797 |
| 0.5308 | 133.33 | 1600 | 0.7544 | 0.5749 | 0.5852 |
| 0.5256 | 150.0 | 1800 | 0.7416 | 0.5739 | 0.5754 |
| 0.5204 | 166.67 | 2000 | 0.7724 | 0.5704 | 0.5725 |
| 0.5153 | 183.33 | 2200 | 0.7760 | 0.5734 | 0.5793 |
| 0.5096 | 200.0 | 2400 | 0.7782 | 0.5706 | 0.5705 |
| 0.5041 | 216.67 | 2600 | 0.7728 | 0.5696 | 0.5715 |
| 0.4984 | 233.33 | 2800 | 0.7850 | 0.5767 | 0.5787 |
| 0.4913 | 250.0 | 3000 | 0.7775 | 0.5751 | 0.5813 |
| 0.4854 | 266.67 | 3200 | 0.8025 | 0.5772 | 0.5806 |
| 0.4772 | 283.33 | 3400 | 0.8047 | 0.5702 | 0.5709 |
| 0.4707 | 300.0 | 3600 | 0.8007 | 0.5700 | 0.5715 |
| 0.4629 | 316.67 | 3800 | 0.8262 | 0.5763 | 0.5803 |
| 0.4552 | 333.33 | 4000 | 0.8503 | 0.5810 | 0.5859 |
| 0.4479 | 350.0 | 4200 | 0.8234 | 0.5796 | 0.5826 |
| 0.441 | 366.67 | 4400 | 0.8767 | 0.5711 | 0.5686 |
| 0.4338 | 383.33 | 4600 | 0.8656 | 0.5815 | 0.5806 |
| 0.4261 | 400.0 | 4800 | 0.8597 | 0.5772 | 0.5819 |
| 0.4209 | 416.67 | 5000 | 0.8654 | 0.5792 | 0.5780 |
| 0.4147 | 433.33 | 5200 | 0.8706 | 0.5761 | 0.5771 |
| 0.4083 | 450.0 | 5400 | 0.8888 | 0.5779 | 0.5780 |
| 0.4014 | 466.67 | 5600 | 0.9073 | 0.5804 | 0.5823 |
| 0.3962 | 483.33 | 5800 | 0.8958 | 0.5792 | 0.5800 |
| 0.3921 | 500.0 | 6000 | 0.8910 | 0.5742 | 0.5748 |
| 0.3862 | 516.67 | 6200 | 0.8982 | 0.5761 | 0.5771 |
| 0.3832 | 533.33 | 6400 | 0.9326 | 0.5758 | 0.5748 |
| 0.3779 | 550.0 | 6600 | 0.9293 | 0.5825 | 0.5803 |
| 0.3745 | 566.67 | 6800 | 0.8957 | 0.5734 | 0.5751 |
| 0.3687 | 583.33 | 7000 | 0.9473 | 0.5762 | 0.5754 |
| 0.3664 | 600.0 | 7200 | 0.9376 | 0.5755 | 0.5751 |
| 0.3628 | 616.67 | 7400 | 0.9207 | 0.5765 | 0.5761 |
| 0.3612 | 633.33 | 7600 | 0.9328 | 0.5718 | 0.5725 |
| 0.3575 | 650.0 | 7800 | 0.9478 | 0.5724 | 0.5764 |
| 0.3557 | 666.67 | 8000 | 0.9472 | 0.5767 | 0.5784 |
| 0.3527 | 683.33 | 8200 | 0.9612 | 0.5772 | 0.5787 |
| 0.35 | 700.0 | 8400 | 0.9611 | 0.5745 | 0.5745 |
| 0.3474 | 716.67 | 8600 | 0.9589 | 0.5735 | 0.5748 |
| 0.3469 | 733.33 | 8800 | 0.9504 | 0.5742 | 0.5754 |
| 0.3448 | 750.0 | 9000 | 0.9533 | 0.5758 | 0.5754 |
| 0.343 | 766.67 | 9200 | 0.9629 | 0.5722 | 0.5728 |
| 0.3412 | 783.33 | 9400 | 0.9752 | 0.5737 | 0.5725 |
| 0.3408 | 800.0 | 9600 | 0.9562 | 0.5716 | 0.5722 |
| 0.3395 | 816.67 | 9800 | 0.9659 | 0.5753 | 0.5751 |
| 0.3393 | 833.33 | 10000 | 0.9661 | 0.5760 | 0.5761 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T19:26:55+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_EMP\_H3K4me2-seqsight\_32768\_512\_30M-L32\_all
====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6665
* F1 Score: 0.5935
* Accuracy: 0.5950
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null | ## Upstream model config
```json
{
"_name_or_path": "output/hermes-llama2-4k/checkpoint-2259",
"architectures": [
"LlamaForCausalLM"
],
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 11008,
"max_position_embeddings": 4096,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 32,
"pad_token_id": 0,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.32.0.dev0",
"use_cache": false,
"vocab_size": 32000
}
```
### Dataset
```python
# Partial excerpt from the training notebook: activation_quant, weight_quant,
# convert_to_bitnet and the model `config` are defined in cells not shown here
# (the helper definitions also appear in the Inference section below).
from datasets import load_dataset
from transformers.models.llama.modeling_llama import *  # wildcard import, as in the Inference snippet below

DATASET = "abideen/Cosmopedia-100k-pretrain" # @param
# e.g. data = load_dataset(DATASET); tokenization steps are omitted in this excerpt

# nn.Linear layers are converted to BitLinear
class BitLinear(nn.Linear):
    def forward(self, x):
        w = self.weight # a weight tensor with shape [d, k]
        x = x.to(w.device)
        RMSNorm = LlamaRMSNorm(x.shape[-1]).to(w.device)
        x_norm = RMSNorm(x)
        # A trick for implementing Straight-Through-Estimator (STE) using detach()
        x_quant = x_norm + (activation_quant(x_norm) - x_norm).detach()
        w_quant = w + (weight_quant(w) - w).detach()
        y = F.linear(x_quant, w_quant)
        return y

### Create the llama model with our custom config. Convert it to bitnet.
model = LlamaForCausalLM(config)
convert_to_bitnet(model, copy_weights=False)
```
### Training
```python
# Trainer and TrainingArguments come from transformers; BATCH_SIZE, EPOCHS,
# LEARNING_RATE, output_path, data_collator and tokenized_data are defined in
# earlier notebook cells that are not shown in this excerpt.
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
output_dir=output_path,
per_device_train_batch_size=BATCH_SIZE,
logging_steps=100,
gradient_accumulation_steps=2,
num_train_epochs=EPOCHS,
weight_decay=0.01,
warmup_steps=0.1,
lr_scheduler_type="cosine",
learning_rate=LEARNING_RATE,
# max_steps=5000,
save_steps=0.25,
fp16=True,
report_to="wandb"
)
trainer = Trainer(
model=model,
tokenizer=tokenizer,
args=args,
data_collator=data_collator,
train_dataset=tokenized_data["train"],
)
trainer.train()
```
### Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.models.llama.modeling_llama import *
# Load a pretrained BitNet model
model = "saadnaeem/Llama2-70M-Cosmopedia-100k-Pretrain"
tokenizer = AutoTokenizer.from_pretrained(model)
model = AutoModelForCausalLM.from_pretrained(model)
def activation_quant(x):
scale = 127.0 / x.abs().max(dim=-1, keepdim=True).values.clamp_(min=1e-5)
y = (x * scale).round().clamp_(-128, 127)
y = y / scale
return y
def weight_quant(w):
scale = 1.0 / w.abs().mean().clamp_(min=1e-5)
u = (w * scale).round().clamp_(-1, 1)
u = u / scale
return u
class BitLinear(nn.Linear):
def forward(self, x):
w = self.weight # a weight tensor with shape [d, k]
x = x.to(w.device)
RMSNorm = LlamaRMSNorm(x.shape[-1]).to(w.device)
x_norm = RMSNorm(x)
        # A trick for implementing Straight-Through-Estimator (STE) using detach()
x_quant = x_norm + (activation_quant(x_norm) - x_norm).detach()
w_quant = w + (weight_quant(w) - w).detach()
y = F.linear(x_quant, w_quant)
return y
def convert_to_bitnet(model, copy_weights):
for name, module in model.named_modules():
# Replace linear layers with BitNet
if isinstance(module, LlamaSdpaAttention) or isinstance(module, LlamaMLP):
for child_name, child_module in module.named_children():
if isinstance(child_module, nn.Linear):
bitlinear = BitLinear(child_module.in_features, child_module.out_features, child_module.bias is not None).to(device="cuda:0")
if copy_weights:
bitlinear.weight = child_module.weight
if child_module.bias is not None:
bitlinear.bias = child_module.bias
setattr(module, child_name, bitlinear)
# Remove redundant input_layernorms
elif isinstance(module, LlamaDecoderLayer):
for child_name, child_module in module.named_children():
if isinstance(child_module, LlamaRMSNorm) and child_name == "input_layernorm":
setattr(module, child_name, nn.Identity().to(device="cuda:0"))
convert_to_bitnet(model, copy_weights=True)
model.to(device="cuda:0")
prompt = "What is Machine Learning?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
generate_ids = model.generate(inputs.input_ids, max_length=50)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
``` | {"license": "apache-2.0"} | saadnaeem/Llama2-70M-Cosmopedia-100k-Pretrain | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T19:28:42+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| ## Upstream model config
### Dataset
### Training
### Inference
| [
"## Upstream model config",
"### Dataset",
"### Training",
"### Inference"
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"## Upstream model config",
"### Dataset",
"### Training",
"### Inference"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/mergekit-community/mergekit-ties-vjlpsxw
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
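As one hedged, concrete route, a quant can be fetched and run with the llama-cpp-python bindings (the Q4_K_M file name below comes from the table in the next section; context size and prompt are illustrative):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/mergekit-ties-vjlpsxw-GGUF",
    filename="mergekit-ties-vjlpsxw.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write two sentences about model merging.", max_tokens=64)
print(out["choices"][0]["text"])
```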
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mergekit-ties-vjlpsxw-GGUF/resolve/main/mergekit-ties-vjlpsxw.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "mergekit-community/mergekit-ties-vjlpsxw", "quantized_by": "mradermacher"} | mradermacher/mergekit-ties-vjlpsxw-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/mergekit-ties-vjlpsxw",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:30:06+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #mergekit #merge #en #base_model-mergekit-community/mergekit-ties-vjlpsxw #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #mergekit #merge #en #base_model-mergekit-community/mergekit-ties-vjlpsxw #endpoints_compatible #region-us \n"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CNEC_1_1_Czert-B-base-cased
This model is a fine-tuned version of [UWB-AIR/Czert-B-base-cased](https://huggingface.co/UWB-AIR/Czert-B-base-cased) on the cnec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3330
- Precision: 0.8261
- Recall: 0.8623
- F1: 0.8438
- Accuracy: 0.9410
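A minimal inference sketch with the token-classification pipeline (the Czech example sentence and the aggregation strategy are illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="stulcrad/CNEC_1_1_Czert-B-base-cased",
    aggregation_strategy="simple",
)
print(ner("Václav Havel se narodil v Praze."))
```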
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5787 | 1.7 | 500 | 0.3008 | 0.7659 | 0.7943 | 0.7798 | 0.9262 |
| 0.2266 | 3.4 | 1000 | 0.2606 | 0.8026 | 0.8437 | 0.8226 | 0.9374 |
| 0.1443 | 5.1 | 1500 | 0.2565 | 0.8189 | 0.8525 | 0.8354 | 0.9407 |
| 0.1004 | 6.8 | 2000 | 0.2807 | 0.8129 | 0.8539 | 0.8329 | 0.9400 |
| 0.0759 | 8.5 | 2500 | 0.2989 | 0.8255 | 0.8627 | 0.8437 | 0.9411 |
| 0.0563 | 10.2 | 3000 | 0.3181 | 0.8251 | 0.8578 | 0.8411 | 0.9402 |
| 0.0475 | 11.9 | 3500 | 0.3279 | 0.8204 | 0.8609 | 0.8402 | 0.9404 |
| 0.0378 | 13.61 | 4000 | 0.3330 | 0.8261 | 0.8623 | 0.8438 | 0.9410 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"tags": ["generated_from_trainer"], "datasets": ["cnec"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "UWB-AIR/Czert-B-base-cased", "model-index": [{"name": "CNEC_1_1_Czert-B-base-cased", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "cnec", "type": "cnec", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "precision", "value": 0.8261421319796954, "name": "Precision"}, {"type": "recall", "value": 0.8622516556291391, "name": "Recall"}, {"type": "f1", "value": 0.8438107582631237, "name": "F1"}, {"type": "accuracy", "value": 0.9410182516810759, "name": "Accuracy"}]}]}]} | stulcrad/CNEC_1_1_Czert-B-base-cased | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:cnec",
"base_model:UWB-AIR/Czert-B-base-cased",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:31:07+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #token-classification #generated_from_trainer #dataset-cnec #base_model-UWB-AIR/Czert-B-base-cased #model-index #autotrain_compatible #endpoints_compatible #region-us
| CNEC\_1\_1\_Czert-B-base-cased
==============================
This model is a fine-tuned version of UWB-AIR/Czert-B-base-cased on the cnec dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3330
* Precision: 0.8261
* Recall: 0.8623
* F1: 0.8438
* Accuracy: 0.9410
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.16.1
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #safetensors #bert #token-classification #generated_from_trainer #dataset-cnec #base_model-UWB-AIR/Czert-B-base-cased #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
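Absent an official snippet, a generic text2text sketch can serve as a starting point; the prompt format expected by this calculator fine-tune is unknown and purely illustrative:
```python
from transformers import pipeline

calc = pipeline("text2text-generation", model="SObryadchikov/t5-large-calculator")
print(calc("2 + 2 ="))
```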
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | SObryadchikov/t5-large-calculator | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T19:31:15+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning | ml-agents |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ulasfiliz954/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]} | ulasfiliz954/ppo-Huggy | null | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | null | 2024-04-16T19:31:55+00:00 | [] | [] | TAGS
#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
|
# ppo Agent playing Huggy
This is a trained model of a ppo agent playing Huggy
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how ML-Agents works:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: ulasfiliz954/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
| [
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: ulasfiliz954/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] | [
"TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n",
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: ulasfiliz954/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8611
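A quick fill-mask sketch (the prompt is illustrative; the model keeps RoBERTa's `<mask>` token):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Anwesh0127/distilroberta-base-finetuned-wikitext2")
for pred in fill("The capital of France is <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```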
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0841 | 1.0 | 2406 | 1.9362 |
| 1.9866 | 2.0 | 4812 | 1.8845 |
| 1.9442 | 3.0 | 7218 | 1.8355 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilroberta-base", "model-index": [{"name": "distilroberta-base-finetuned-wikitext2", "results": []}]} | Anwesh0127/distilroberta-base-finetuned-wikitext2 | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:32:26+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-distilroberta-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilroberta-base-finetuned-wikitext2
======================================
This model is a fine-tuned version of distilroberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.8611
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-distilroberta-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Shen-Wang/rash_img_LoRA
<Gallery />
## Model description
These are Shen-Wang/rash_img_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of skin rash to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](Shen-Wang/rash_img_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
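Until the snippet above is filled in, a hedged sketch of the usual SDXL-plus-LoRA workflow with diffusers (dtype, device, and inference settings are assumptions):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("Shen-Wang/rash_img_LoRA")

# The card notes madebyollin/sdxl-vae-fp16-fix was used during training;
# swapping that VAE in for inference is optional.
image = pipe("a photo of skin rash", num_inference_steps=30).images[0]
image.save("rash_sample.png")
```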
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of skin rash", "widget": []} | Shen-Wang/rash_img_LoRA | null | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"dora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null | 2024-04-16T19:32:26+00:00 | [] | [] | TAGS
#diffusers #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #dora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
|
# SDXL LoRA DreamBooth - Shen-Wang/rash_img_LoRA
<Gallery />
## Model description
These are Shen-Wang/rash_img_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using DreamBooth.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of skin rash to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# SDXL LoRA DreamBooth - Shen-Wang/rash_img_LoRA\n\n<Gallery />",
"## Model description\n\nThese are Shen-Wang/rash_img_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Trigger words\n\nYou should use a photo of skin rash to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #dora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n",
"# SDXL LoRA DreamBooth - Shen-Wang/rash_img_LoRA\n\n<Gallery />",
"## Model description\n\nThese are Shen-Wang/rash_img_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Trigger words\n\nYou should use a photo of skin rash to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tapt_amazon_helpfulness_classification
This model is a fine-tuned version of [BigTMiami/tapt_helpfulness_base_pretraining_model_final](https://huggingface.co/BigTMiami/tapt_helpfulness_base_pretraining_model_final) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3873
- Accuracy: 0.87
- F1 Macro: 0.6868
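A minimal inference sketch with the text-classification pipeline (label names depend on how the head was configured and may surface as generic LABEL_0/LABEL_1):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="BigTMiami/tapt_amazon_helpfulness_classification",
)
print(clf("This review explains exactly which sizes fit and why."))
```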
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 3
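Expressed as 🤗 `TrainingArguments`, the list above corresponds roughly to the sketch below; the output directory, model, datasets, and metric function are placeholders assumed to be defined elsewhere, not taken from this card.

```python
from transformers import TrainingArguments, Trainer

# Hypothetical sketch mirroring the hyperparameters listed above;
# `model`, `train_ds`, `eval_ds` and `compute_metrics` are assumed to exist.
args = TrainingArguments(
    output_dir="tapt_amazon_helpfulness_classification",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    num_train_epochs=3,
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds,
                  eval_dataset=eval_ds, compute_metrics=compute_metrics)
# trainer.train()
```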
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.3226 | 1.0 | 1563 | 0.3149 | 0.8688 | 0.6717 |
| 0.2854 | 2.0 | 3126 | 0.3745 | 0.8682 | 0.6249 |
| 0.2016 | 3.0 | 4689 | 0.3873 | 0.87 | 0.6868 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "BigTMiami/tapt_helpfulness_base_pretraining_model_final", "model-index": [{"name": "tapt_amazon_helpfulness_classification", "results": []}]} | BigTMiami/tapt_amazon_helpfulness_classification | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:BigTMiami/tapt_helpfulness_base_pretraining_model_final",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:33:11+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-BigTMiami/tapt_helpfulness_base_pretraining_model_final #license-mit #autotrain_compatible #endpoints_compatible #region-us
| tapt\_amazon\_helpfulness\_classification
=========================================
This model is a fine-tuned version of BigTMiami/tapt\_helpfulness\_base\_pretraining\_model\_final on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3873
* Accuracy: 0.87
* F1 Macro: 0.6868
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.06
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-BigTMiami/tapt_helpfulness_base_pretraining_model_final #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-classification | transformers | ## Usage
```python
import torch
from informer_models import InformerConfig, InformerForSequenceClassification

# Assumes `device` and `test_dataloader` are defined by the caller, e.g.
# device = "cuda" if torch.cuda.is_available() else "cpu"
model = InformerForSequenceClassification.from_pretrained("BrachioLab/supernova-classification")
model.to(device)
model.eval()

y_true = []
y_pred = []
for i, batch in enumerate(test_dataloader):
    print(f"processing batch {i}")
    batch = {k: v.to(device) for k, v in batch.items() if k != "objid"}  # drop non-tensor id field
    with torch.no_grad():
        outputs = model(**batch)
    y_true.extend(batch['labels'].cpu().numpy())
    y_pred.extend(torch.argmax(outputs.logits, dim=2).squeeze().cpu().numpy())
print(f"accuracy: {sum([1 for i, j in zip(y_true, y_pred) if i == j]) / len(y_true)}")
``` | {"license": "mit"} | BrachioLab/supernova-classification | null | [
"transformers",
"pytorch",
"informer",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:34:16+00:00 | [] | [] | TAGS
#transformers #pytorch #informer #text-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us
| ## Usage
| [
"## Usage"
] | [
"TAGS\n#transformers #pytorch #informer #text-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"## Usage"
] |
text-generation | transformers |
# Spaetzle-v65-7b
Spaetzle-v65-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [yleo/EmertonMonarch-7B](https://huggingface.co/yleo/EmertonMonarch-7B)
## 🧩 Configuration
```yaml
models:
- model: cstr/spaetzle-v62-7b
# no parameters necessary for base model
- model: yleo/EmertonMonarch-7B
parameters:
density: 0.60
weight: 0.3
merge_method: dare_ties
base_model: cstr/spaetzle-v62-7b
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
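The YAML above can be run with the mergekit CLI; the sketch below assumes the config is saved as `config.yaml` and that the standard mergekit package and flags are used — these details come from common mergekit usage, not from this card.

```python
# Hedged sketch: package name and CLI flags are assumptions based on typical mergekit usage.
!pip install -qU mergekit
!mergekit-yaml config.yaml ./Spaetzle-v65-7b --cuda
```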
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "cstr/Spaetzle-v65-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "yleo/EmertonMonarch-7B"], "base_model": ["yleo/EmertonMonarch-7B"]} | cstr/Spaetzle-v65-7b | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"yleo/EmertonMonarch-7B",
"conversational",
"base_model:yleo/EmertonMonarch-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T19:36:36+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #yleo/EmertonMonarch-7B #conversational #base_model-yleo/EmertonMonarch-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Spaetzle-v65-7b
Spaetzle-v65-7b is a merge of the following models using LazyMergekit:
* yleo/EmertonMonarch-7B
## Configuration
## Usage
| [
"# Spaetzle-v65-7b\n\nSpaetzle-v65-7b is a merge of the following models using LazyMergekit:\n* yleo/EmertonMonarch-7B",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #yleo/EmertonMonarch-7B #conversational #base_model-yleo/EmertonMonarch-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Spaetzle-v65-7b\n\nSpaetzle-v65-7b is a merge of the following models using LazyMergekit:\n* yleo/EmertonMonarch-7B",
"## Configuration",
"## Usage"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-petco-filtered_fontsize-ctr
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0022
- Mse: 0.0022
- Rmse: 0.0464
- Mae: 0.0364
- R2: 0.4468
- Accuracy: 0.8
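How the accuracy figure is derived from a regression output is not stated in this card; the sketch below shows how the listed regression metrics are commonly computed, with a purely illustrative tolerance standing in for whatever accuracy definition the card actually uses.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Hypothetical compute_metrics for a single-output regression head;
# the tolerance used for "accuracy" is an assumption, not taken from this card.
def compute_metrics(eval_pred, tolerance=0.05):
    predictions, labels = eval_pred
    predictions = predictions.squeeze()
    mse = mean_squared_error(labels, predictions)
    return {
        "mse": mse,
        "rmse": float(np.sqrt(mse)),
        "mae": mean_absolute_error(labels, predictions),
        "r2": r2_score(labels, predictions),
        "accuracy": float(np.mean(np.abs(predictions - labels) <= tolerance)),
    }
```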
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Rmse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:-------:|:--------:|
| 0.0403 | 1.0 | 15 | 0.0119 | 0.0119 | 0.1091 | 0.0921 | -2.0564 | 0.3 |
| 0.0167 | 2.0 | 30 | 0.0031 | 0.0031 | 0.0555 | 0.0449 | 0.2100 | 0.6 |
| 0.0125 | 3.0 | 45 | 0.0037 | 0.0037 | 0.0610 | 0.0514 | 0.0429 | 0.5167 |
| 0.0098 | 4.0 | 60 | 0.0026 | 0.0026 | 0.0508 | 0.0407 | 0.3377 | 0.6333 |
| 0.0105 | 5.0 | 75 | 0.0029 | 0.0029 | 0.0543 | 0.0413 | 0.2439 | 0.7 |
| 0.0083 | 6.0 | 90 | 0.0029 | 0.0029 | 0.0535 | 0.0392 | 0.2644 | 0.7 |
| 0.0052 | 7.0 | 105 | 0.0026 | 0.0026 | 0.0512 | 0.0432 | 0.3280 | 0.7 |
| 0.0044 | 8.0 | 120 | 0.0023 | 0.0023 | 0.0482 | 0.0376 | 0.4025 | 0.7667 |
| 0.0045 | 9.0 | 135 | 0.0037 | 0.0037 | 0.0605 | 0.0519 | 0.0601 | 0.4333 |
| 0.0033 | 10.0 | 150 | 0.0028 | 0.0028 | 0.0525 | 0.0367 | 0.2909 | 0.7 |
| 0.0035 | 11.0 | 165 | 0.0024 | 0.0024 | 0.0487 | 0.0403 | 0.3918 | 0.7667 |
| 0.003 | 12.0 | 180 | 0.0022 | 0.0022 | 0.0471 | 0.0367 | 0.4305 | 0.75 |
| 0.0023 | 13.0 | 195 | 0.0022 | 0.0022 | 0.0464 | 0.0364 | 0.4468 | 0.8 |
| 0.0022 | 14.0 | 210 | 0.0024 | 0.0024 | 0.0489 | 0.0387 | 0.3863 | 0.7167 |
| 0.0025 | 15.0 | 225 | 0.0024 | 0.0024 | 0.0492 | 0.0379 | 0.3792 | 0.75 |
| 0.0021 | 16.0 | 240 | 0.0025 | 0.0025 | 0.0497 | 0.0358 | 0.3667 | 0.7833 |
| 0.0021 | 17.0 | 255 | 0.0024 | 0.0024 | 0.0486 | 0.0366 | 0.3943 | 0.7667 |
| 0.0017 | 18.0 | 270 | 0.0024 | 0.0024 | 0.0485 | 0.0354 | 0.3950 | 0.7833 |
| 0.0017 | 19.0 | 285 | 0.0028 | 0.0028 | 0.0529 | 0.0388 | 0.2819 | 0.75 |
| 0.0018 | 20.0 | 300 | 0.0025 | 0.0025 | 0.0496 | 0.0363 | 0.3676 | 0.7667 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "bert-petco-filtered_fontsize-ctr", "results": []}]} | yimiwang/bert-petco-filtered_fontsize-ctr | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:37:49+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| bert-petco-filtered\_fontsize-ctr
=================================
This model is a fine-tuned version of google-bert/bert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0022
* Mse: 0.0022
* Rmse: 0.0464
* Mae: 0.0364
* R2: 0.4468
* Accuracy: 0.8
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
reinforcement-learning | null |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| {"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-CartPole-v1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "955.60 +/- 133.20", "name": "mean_reward", "verified": false}]}]}]} | spietari/Reinforce-CartPole-v1 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null | 2024-04-16T19:38:36+00:00 | [] | [] | TAGS
#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
|
# Reinforce Agent playing CartPole-v1
This is a trained model of a Reinforce agent playing CartPole-v1 .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
| [
"# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] | [
"TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n",
"# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K9ac-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8403
- F1 Score: 0.6129
- Accuracy: 0.6121
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
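Since the adapter was trained with PEFT, the settings above would typically be paired with an adapter configuration like the sketch below before being passed to a Trainer; the adapter type, rank, alpha, and dropout are illustrative guesses — the card only exposes the optimizer and scheduler settings.

```python
from peft import LoraConfig, TaskType, get_peft_model

# Hedged sketch; the adapter type and its hyperparameters are illustrative guesses —
# the card only states that PEFT was used, not the exact config.
lora_config = LoraConfig(task_type=TaskType.SEQ_CLS, r=32, lora_alpha=64, lora_dropout=0.1)
peft_model = get_peft_model(base_model, lora_config)  # base_model: the seqsight checkpoint, loaded elsewhere
peft_model.print_trainable_parameters()
```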
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6575 | 18.18 | 200 | 0.6541 | 0.6116 | 0.6243 |
| 0.6046 | 36.36 | 400 | 0.6717 | 0.6134 | 0.6132 |
| 0.577 | 54.55 | 600 | 0.6915 | 0.6083 | 0.6078 |
| 0.551 | 72.73 | 800 | 0.7053 | 0.6042 | 0.6035 |
| 0.532 | 90.91 | 1000 | 0.7198 | 0.6074 | 0.6071 |
| 0.5212 | 109.09 | 1200 | 0.7222 | 0.6038 | 0.6042 |
| 0.5135 | 127.27 | 1400 | 0.7194 | 0.6041 | 0.6049 |
| 0.5049 | 145.45 | 1600 | 0.7334 | 0.6044 | 0.6038 |
| 0.4986 | 163.64 | 1800 | 0.7398 | 0.6087 | 0.6089 |
| 0.4907 | 181.82 | 2000 | 0.7383 | 0.6118 | 0.6114 |
| 0.4848 | 200.0 | 2200 | 0.7424 | 0.6092 | 0.6092 |
| 0.4762 | 218.18 | 2400 | 0.7477 | 0.6036 | 0.6074 |
| 0.4697 | 236.36 | 2600 | 0.7567 | 0.6063 | 0.6056 |
| 0.4623 | 254.55 | 2800 | 0.7701 | 0.6132 | 0.6128 |
| 0.4549 | 272.73 | 3000 | 0.7707 | 0.6135 | 0.6146 |
| 0.446 | 290.91 | 3200 | 0.7721 | 0.6079 | 0.6071 |
| 0.438 | 309.09 | 3400 | 0.7700 | 0.6139 | 0.6132 |
| 0.4307 | 327.27 | 3600 | 0.7789 | 0.6156 | 0.6150 |
| 0.4219 | 345.45 | 3800 | 0.7983 | 0.6170 | 0.6168 |
| 0.4126 | 363.64 | 4000 | 0.8152 | 0.6149 | 0.6146 |
| 0.4049 | 381.82 | 4200 | 0.8316 | 0.6237 | 0.6236 |
| 0.3976 | 400.0 | 4400 | 0.8249 | 0.6217 | 0.6214 |
| 0.3917 | 418.18 | 4600 | 0.8527 | 0.6189 | 0.6182 |
| 0.3854 | 436.36 | 4800 | 0.8669 | 0.6204 | 0.6196 |
| 0.3783 | 454.55 | 5000 | 0.8538 | 0.6158 | 0.6153 |
| 0.3722 | 472.73 | 5200 | 0.8551 | 0.6148 | 0.6142 |
| 0.3662 | 490.91 | 5400 | 0.8766 | 0.6198 | 0.6196 |
| 0.3605 | 509.09 | 5600 | 0.8899 | 0.6181 | 0.6178 |
| 0.3566 | 527.27 | 5800 | 0.8837 | 0.6217 | 0.6211 |
| 0.3503 | 545.45 | 6000 | 0.8930 | 0.6187 | 0.6182 |
| 0.3468 | 563.64 | 6200 | 0.8569 | 0.6085 | 0.6092 |
| 0.3415 | 581.82 | 6400 | 0.8902 | 0.6132 | 0.6128 |
| 0.3367 | 600.0 | 6600 | 0.8935 | 0.6191 | 0.6186 |
| 0.3352 | 618.18 | 6800 | 0.9271 | 0.6139 | 0.6135 |
| 0.3305 | 636.36 | 7000 | 0.8851 | 0.6177 | 0.6171 |
| 0.3278 | 654.55 | 7200 | 0.9267 | 0.6214 | 0.6207 |
| 0.325 | 672.73 | 7400 | 0.9298 | 0.6136 | 0.6128 |
| 0.3207 | 690.91 | 7600 | 0.9300 | 0.6200 | 0.6193 |
| 0.3203 | 709.09 | 7800 | 0.9269 | 0.6184 | 0.6178 |
| 0.3161 | 727.27 | 8000 | 0.9369 | 0.6186 | 0.6178 |
| 0.3155 | 745.45 | 8200 | 0.9372 | 0.6212 | 0.6207 |
| 0.3135 | 763.64 | 8400 | 0.9347 | 0.6184 | 0.6182 |
| 0.3114 | 781.82 | 8600 | 0.9430 | 0.6154 | 0.6153 |
| 0.3098 | 800.0 | 8800 | 0.9383 | 0.6202 | 0.6200 |
| 0.3063 | 818.18 | 9000 | 0.9261 | 0.6174 | 0.6178 |
| 0.3065 | 836.36 | 9200 | 0.9398 | 0.6183 | 0.6178 |
| 0.3044 | 854.55 | 9400 | 0.9388 | 0.6156 | 0.6153 |
| 0.3038 | 872.73 | 9600 | 0.9487 | 0.6204 | 0.6200 |
| 0.3035 | 890.91 | 9800 | 0.9473 | 0.6168 | 0.6164 |
| 0.3035 | 909.09 | 10000 | 0.9427 | 0.6187 | 0.6182 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T19:39:55+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_EMP\_H3K9ac-seqsight\_32768\_512\_30M-L32\_all
===================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K9ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8403
* F1 Score: 0.6129
* Accuracy: 0.6121
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="eulpicard/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
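The snippet above assumes a `load_from_hub` helper; a minimal version consistent with how the course utilities typically implement it is shown below (the exact helper shipped with the course may differ).

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle the saved Q-table dictionary from the Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```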
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | eulpicard/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-16T19:40:39+00:00 | [] | [] | TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing1 FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1 .
## Usage
| [
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] | [
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me3-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7726
- F1 Score: 0.5429
- Accuracy: 0.5429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6878 | 13.33 | 200 | 0.6759 | 0.5607 | 0.5780 |
| 0.6553 | 26.67 | 400 | 0.6908 | 0.5615 | 0.5611 |
| 0.6383 | 40.0 | 600 | 0.7026 | 0.5529 | 0.5535 |
| 0.6209 | 53.33 | 800 | 0.7094 | 0.5584 | 0.5601 |
| 0.6046 | 66.67 | 1000 | 0.7177 | 0.5577 | 0.5573 |
| 0.5936 | 80.0 | 1200 | 0.7312 | 0.5547 | 0.5543 |
| 0.5864 | 93.33 | 1400 | 0.7418 | 0.5547 | 0.5571 |
| 0.5786 | 106.67 | 1600 | 0.7280 | 0.5582 | 0.5579 |
| 0.5737 | 120.0 | 1800 | 0.7467 | 0.5573 | 0.5571 |
| 0.5684 | 133.33 | 2000 | 0.7629 | 0.5485 | 0.5481 |
| 0.5659 | 146.67 | 2200 | 0.7519 | 0.5616 | 0.5617 |
| 0.562 | 160.0 | 2400 | 0.7406 | 0.5566 | 0.5598 |
| 0.5573 | 173.33 | 2600 | 0.7752 | 0.5570 | 0.5590 |
| 0.5531 | 186.67 | 2800 | 0.7599 | 0.5559 | 0.5557 |
| 0.5496 | 200.0 | 3000 | 0.7650 | 0.5574 | 0.5571 |
| 0.5458 | 213.33 | 3200 | 0.7875 | 0.5524 | 0.5524 |
| 0.5411 | 226.67 | 3400 | 0.7659 | 0.5512 | 0.5508 |
| 0.5349 | 240.0 | 3600 | 0.8021 | 0.5563 | 0.5565 |
| 0.5316 | 253.33 | 3800 | 0.8077 | 0.5503 | 0.5524 |
| 0.5251 | 266.67 | 4000 | 0.8030 | 0.5513 | 0.5511 |
| 0.5211 | 280.0 | 4200 | 0.8134 | 0.5559 | 0.5554 |
| 0.5148 | 293.33 | 4400 | 0.8384 | 0.5541 | 0.5538 |
| 0.5114 | 306.67 | 4600 | 0.8187 | 0.5556 | 0.5552 |
| 0.5056 | 320.0 | 4800 | 0.8141 | 0.5534 | 0.5538 |
| 0.4993 | 333.33 | 5000 | 0.8468 | 0.5432 | 0.5451 |
| 0.4944 | 346.67 | 5200 | 0.8079 | 0.5486 | 0.5484 |
| 0.4897 | 360.0 | 5400 | 0.8488 | 0.5504 | 0.55 |
| 0.4854 | 373.33 | 5600 | 0.8461 | 0.5512 | 0.5508 |
| 0.4825 | 386.67 | 5800 | 0.8462 | 0.5412 | 0.5416 |
| 0.4765 | 400.0 | 6000 | 0.8467 | 0.5484 | 0.5481 |
| 0.4726 | 413.33 | 6200 | 0.8437 | 0.5477 | 0.5476 |
| 0.4694 | 426.67 | 6400 | 0.8659 | 0.5466 | 0.5462 |
| 0.4661 | 440.0 | 6600 | 0.8671 | 0.5457 | 0.5454 |
| 0.4614 | 453.33 | 6800 | 0.8881 | 0.5534 | 0.5530 |
| 0.4593 | 466.67 | 7000 | 0.8847 | 0.5483 | 0.5484 |
| 0.4572 | 480.0 | 7200 | 0.8761 | 0.5504 | 0.5503 |
| 0.4537 | 493.33 | 7400 | 0.8906 | 0.5547 | 0.5543 |
| 0.4506 | 506.67 | 7600 | 0.8837 | 0.5531 | 0.5527 |
| 0.4478 | 520.0 | 7800 | 0.8948 | 0.5534 | 0.5530 |
| 0.4456 | 533.33 | 8000 | 0.8908 | 0.5496 | 0.5492 |
| 0.4441 | 546.67 | 8200 | 0.9016 | 0.5488 | 0.5486 |
| 0.4428 | 560.0 | 8400 | 0.8886 | 0.5474 | 0.5470 |
| 0.4413 | 573.33 | 8600 | 0.8980 | 0.5507 | 0.5503 |
| 0.4389 | 586.67 | 8800 | 0.9092 | 0.5473 | 0.5470 |
| 0.4383 | 600.0 | 9000 | 0.8996 | 0.5477 | 0.5476 |
| 0.4369 | 613.33 | 9200 | 0.9017 | 0.5499 | 0.5495 |
| 0.4365 | 626.67 | 9400 | 0.8930 | 0.5483 | 0.5478 |
| 0.4361 | 640.0 | 9600 | 0.8967 | 0.5488 | 0.5484 |
| 0.4337 | 653.33 | 9800 | 0.9047 | 0.5491 | 0.5486 |
| 0.4327 | 666.67 | 10000 | 0.9023 | 0.5483 | 0.5478 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T19:40:53+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_EMP\_H3K4me3-seqsight\_32768\_512\_30M-L32\_all
====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7726
* F1 Score: 0.5429
* Accuracy: 0.5429
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0459
- F1 Score: 0.7342
- Accuracy: 0.7344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5861 | 33.33 | 200 | 0.5437 | 0.7234 | 0.7248 |
| 0.4641 | 66.67 | 400 | 0.5703 | 0.7174 | 0.7194 |
| 0.4105 | 100.0 | 600 | 0.5934 | 0.7329 | 0.7324 |
| 0.3642 | 133.33 | 800 | 0.6495 | 0.7326 | 0.7351 |
| 0.3329 | 166.67 | 1000 | 0.6590 | 0.7313 | 0.7317 |
| 0.3117 | 200.0 | 1200 | 0.6878 | 0.7268 | 0.7283 |
| 0.2925 | 233.33 | 1400 | 0.7238 | 0.7243 | 0.7255 |
| 0.2753 | 266.67 | 1600 | 0.7315 | 0.7296 | 0.7303 |
| 0.2617 | 300.0 | 1800 | 0.7670 | 0.7206 | 0.7228 |
| 0.2485 | 333.33 | 2000 | 0.7813 | 0.7230 | 0.7255 |
| 0.2365 | 366.67 | 2200 | 0.7706 | 0.7286 | 0.7296 |
| 0.2237 | 400.0 | 2400 | 0.7795 | 0.7273 | 0.7269 |
| 0.2121 | 433.33 | 2600 | 0.8088 | 0.7315 | 0.7337 |
| 0.2007 | 466.67 | 2800 | 0.8247 | 0.7313 | 0.7331 |
| 0.1887 | 500.0 | 3000 | 0.8669 | 0.7295 | 0.7317 |
| 0.1784 | 533.33 | 3200 | 0.8661 | 0.7305 | 0.7310 |
| 0.1694 | 566.67 | 3400 | 0.8972 | 0.7254 | 0.7276 |
| 0.1597 | 600.0 | 3600 | 0.8964 | 0.7360 | 0.7358 |
| 0.1524 | 633.33 | 3800 | 0.8536 | 0.7294 | 0.7296 |
| 0.1455 | 666.67 | 4000 | 0.9225 | 0.7323 | 0.7331 |
| 0.1386 | 700.0 | 4200 | 0.9030 | 0.7313 | 0.7324 |
| 0.1335 | 733.33 | 4400 | 0.9683 | 0.7299 | 0.7317 |
| 0.1278 | 766.67 | 4600 | 0.9174 | 0.7336 | 0.7337 |
| 0.1217 | 800.0 | 4800 | 0.9963 | 0.7304 | 0.7324 |
| 0.1196 | 833.33 | 5000 | 0.9602 | 0.7362 | 0.7365 |
| 0.1136 | 866.67 | 5200 | 0.9567 | 0.7315 | 0.7337 |
| 0.1117 | 900.0 | 5400 | 0.9742 | 0.7365 | 0.7379 |
| 0.1074 | 933.33 | 5600 | 1.0110 | 0.7284 | 0.7303 |
| 0.1047 | 966.67 | 5800 | 1.0080 | 0.7409 | 0.7413 |
| 0.1019 | 1000.0 | 6000 | 1.0481 | 0.7278 | 0.7303 |
| 0.1 | 1033.33 | 6200 | 0.9904 | 0.7348 | 0.7358 |
| 0.0974 | 1066.67 | 6400 | 1.0570 | 0.7305 | 0.7324 |
| 0.0937 | 1100.0 | 6600 | 1.0391 | 0.7342 | 0.7351 |
| 0.0931 | 1133.33 | 6800 | 1.0236 | 0.7377 | 0.7385 |
| 0.0915 | 1166.67 | 7000 | 1.0456 | 0.7413 | 0.7420 |
| 0.0904 | 1200.0 | 7200 | 1.0669 | 0.7394 | 0.7399 |
| 0.0879 | 1233.33 | 7400 | 1.0512 | 0.7358 | 0.7365 |
| 0.0876 | 1266.67 | 7600 | 1.0633 | 0.7382 | 0.7392 |
| 0.0857 | 1300.0 | 7800 | 1.0594 | 0.7338 | 0.7358 |
| 0.0847 | 1333.33 | 8000 | 1.0823 | 0.7362 | 0.7372 |
| 0.0838 | 1366.67 | 8200 | 1.0787 | 0.7390 | 0.7399 |
| 0.0821 | 1400.0 | 8400 | 1.0722 | 0.7316 | 0.7331 |
| 0.0813 | 1433.33 | 8600 | 1.0928 | 0.7354 | 0.7365 |
| 0.0811 | 1466.67 | 8800 | 1.0917 | 0.7334 | 0.7344 |
| 0.0798 | 1500.0 | 9000 | 1.1213 | 0.7384 | 0.7399 |
| 0.0785 | 1533.33 | 9200 | 1.1047 | 0.7351 | 0.7365 |
| 0.0786 | 1566.67 | 9400 | 1.1006 | 0.7374 | 0.7385 |
| 0.0784 | 1600.0 | 9600 | 1.1020 | 0.7337 | 0.7351 |
| 0.0784 | 1633.33 | 9800 | 1.1018 | 0.7346 | 0.7358 |
| 0.0771 | 1666.67 | 10000 | 1.1062 | 0.7345 | 0.7358 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_EMP_H4-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T19:42:40+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_EMP\_H4-seqsight\_32768\_512\_30M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H4 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0459
* F1 Score: 0.7342
* Accuracy: 0.7344
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | peft | # Base Model: mistralai/Mistral-7B-Instruct-v0_2_student_answer_train_examples_mistral_0416
* LoRA weights for Mistral-7B-Instruct-v0_2
# Noteworthy changes:
* reduced training hyperparams: epochs=3 (previously 4)
* new training prompt: "Teenager students write in simple sentences.
You are a teenager student, and please answer the following question. {training example}"
* old training prompt: "Teenager students write in simple sentences [with typos and grammar errors].
You are a teenager student, and please answer the following question. {training example}"
## Model Details
Fine-tuned model that talks like middle school students, using simple vocabulary and grammar.
* Trained on student Q&As on physics topics, including pulley/ramp examples that discuss work, force, etc.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Model Details
Fine-tuned model to talk like middle school students, using typos/grammar errors.
Trained on student Q&As on physics topics, including pulley/ramp examples that discuss work, force, etc.
- **Developed by:** Nora T
- **Finetuned from model:** mistralai_Mistral-7B-Instruct-v0.2
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## How to Get Started:
1. Load Mistral model first:
```
import torch  # required for the torch.bfloat16 / torch.float16 dtypes used below
from peft import PeftModel  # for loading the fine-tuned LoRA adapter
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, GenerationConfig, GPTQConfig, BitsAndBytesConfig
model_name_or_path = "mistralai/Mistral-7B-Instruct-v0.2"
nf4_config = BitsAndBytesConfig( # quantization 4-bit
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
quantization_config=nf4_config,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
```
2. Load in LoRA weights:
```
lora_model_path = "{path_to_loras_folder}/mistralai_Mistral-7B-Instruct-v0.2-testgen-LoRAs" # load loras
model = PeftModel.from_pretrained(
model, lora_model_path, torch_dtype=torch.float16, force_download=True,
)
```
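Once the adapter is attached, generation goes through the usual `generate` API; a short hedged example follows — the question wording and sampling settings are illustrative, not taken from the training data.

```python
# Illustrative only: prompt wording and decoding settings are assumptions.
prompt = "[INST] You are a teenager student, and please answer the following question. Why does a ramp make it easier to lift a heavy box? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```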
## Training Hyperparams
* LoRA Rank: 128
* LoRA Alpha: 32
* Batch Size: 64
* Cutoff Length: 256
* Learning rate: 3e-4
* Epochs: 3
* LoRA Dropout: 0.05
### Training Data
Trained on raw text file
#### Preprocessing [optional]
[More Information Needed]
## Technical Specifications
### Model Architecture and Objective
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
### Framework versions
- PEFT 0.7.1 | {"language": ["en"], "library_name": "peft", "tags": ["education"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "pipeline_tag": "text-generation"} | ntseng/mistralai_Mistral-7B-Instruct-v0_2_student_answer_train_examples_mistral_0416 | null | [
"peft",
"tensorboard",
"safetensors",
"education",
"text-generation",
"en",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-04-16T19:43:32+00:00 | [] | [
"en"
] | TAGS
#peft #tensorboard #safetensors #education #text-generation #en #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
| # Base Model: mistralai/Mistral-7B-Instruct-v0_2_student_answer_train_examples_mistral_0416
* LoRAs weights for Mistral-7b-Instruct-v0_2
# Noteworthy changes:
* reduced training hyperparams: epochs=3 (previously 4)
* new training prompt: "Teenager students write in simple sentences.
You are a teenager student, and please answer the following question. {training example}"
* old training prompt: "Teenager students write in simple sentences [with typos and grammar errors].
You are a teenager student, and please answer the following question. {training example}"
## Model Details
Fine-tuned model that talks like middle school students, using simple vocabulary and grammar.
* Trained on student Q&As physics topics including pulley/ramp examples that discuss work, force, and etc.
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Model Details
Fine-tuned model to talk like middle school students, using typos/grammar errors.
Trained on student Q&As physics topics including pulley/ramp examples that discuss work, force, and etc.
- Developed by: Nora T
- Finetuned from model: mistralai_Mistral-7B-Instruct-v0.2
- Repository:
- Paper [optional]:
- Demo [optional]:
## How to Get Started:
1. Load Mistral model first:
2. Load in LoRA weights:
## Training Hyperparams
* LoRA Rank: 128
* LoRA Alpha: 32
* Batch Size: 64
* Cutoff Length: 256
* Learning rate: 3e-4
* Epochs: 3
* LoRA Dropout: 0.05
### Training Data
Trained on raw text file
#### Preprocessing [optional]
## Technical Specifications
### Model Architecture and Objective
#### Hardware
#### Software
[optional]
### Framework versions
- PEFT 0.7.1 | [
"# Base Model: mistralai/Mistral-7B-Instruct-v0_2_student_answer_train_examples_mistral_0416\n * LoRAs weights for Mistral-7b-Instruct-v0_2",
"# Noteworthy changes: \n * reduced training hyperparams: epochs=3 (previously 4)\n * new training prompt: \"Teenager students write in simple sentences.\n You are a teenager student, and please answer the following question. {training example}\"\n\n * old training prompt: \"Teenager students write in simple sentences [with typos and grammar errors].\n You are a teenager student, and please answer the following question. {training example}\"",
"## Model Details\nFine-tuned model that talks like middle school students, using simple vocabulary and grammar. \n * Trained on student Q&As physics topics including pulley/ramp examples that discuss work, force, and etc.",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Model Details\nFine-tuned model to talk like middle school students, using typos/grammar errors. \nTrained on student Q&As physics topics including pulley/ramp examples that discuss work, force, and etc.\n\n\n- Developed by: Nora T\n- Finetuned from model: mistralai_Mistral-7B-Instruct-v0.2\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## How to Get Started:\n1. Load Mistral model first:\n\n\n2. Load in LoRA weights:",
"## Training Hyperparams\n * LoRA Rank: 128\n * LoRA Alpha: 32\n * Batch Size: 64\n * Cutoff Length: 256\n * Learning rate: 3e-4 \n * Epochs: 3\n * LoRA Dropout: 0.05",
"### Training Data\nTrained on raw text file",
"#### Preprocessing [optional]",
"## Technical Specifications",
"### Model Architecture and Objective",
"#### Hardware",
"#### Software\n\n\n\n[optional]",
"### Framework versions\n\n- PEFT 0.7.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #education #text-generation #en #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n",
"# Base Model: mistralai/Mistral-7B-Instruct-v0_2_student_answer_train_examples_mistral_0416\n * LoRAs weights for Mistral-7b-Instruct-v0_2",
"# Noteworthy changes: \n * reduced training hyperparams: epochs=3 (previously 4)\n * new training prompt: \"Teenager students write in simple sentences.\n You are a teenager student, and please answer the following question. {training example}\"\n\n * old training prompt: \"Teenager students write in simple sentences [with typos and grammar errors].\n You are a teenager student, and please answer the following question. {training example}\"",
"## Model Details\nFine-tuned model that talks like middle school students, using simple vocabulary and grammar. \n * Trained on student Q&As physics topics including pulley/ramp examples that discuss work, force, and etc.",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Model Details\nFine-tuned model to talk like middle school students, using typos/grammar errors. \nTrained on student Q&As physics topics including pulley/ramp examples that discuss work, force, and etc.\n\n\n- Developed by: Nora T\n- Finetuned from model: mistralai_Mistral-7B-Instruct-v0.2\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## How to Get Started:\n1. Load Mistral model first:\n\n\n2. Load in LoRA weights:",
"## Training Hyperparams\n * LoRA Rank: 128\n * LoRA Alpha: 32\n * Batch Size: 64\n * Cutoff Length: 256\n * Learning rate: 3e-4 \n * Epochs: 3\n * LoRA Dropout: 0.05",
"### Training Data\nTrained on raw text file",
"#### Preprocessing [optional]",
"## Technical Specifications",
"### Model Architecture and Objective",
"#### Hardware",
"#### Software\n\n\n\n[optional]",
"### Framework versions\n\n- PEFT 0.7.1"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CNEC_2_0_Czert-B-base-cased
This model is a fine-tuned version of [UWB-AIR/Czert-B-base-cased](https://huggingface.co/UWB-AIR/Czert-B-base-cased) on the cnec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3352
- Precision: 0.8093
- Recall: 0.8548
- F1: 0.8314
- Accuracy: 0.9446
## Model description
More information needed
## Intended uses & limitations
More information needed
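The card gives no usage guidance; for a Czech NER fine-tune like this one, inference typically goes through the token-classification pipeline, as in the sketch below. The example sentence and aggregation strategy are illustrative assumptions, not from this card.

```python
from transformers import pipeline

# Hedged usage sketch; aggregation strategy and example sentence are illustrative.
ner = pipeline("token-classification",
               model="stulcrad/CNEC_2_0_Czert-B-base-cased",
               aggregation_strategy="simple")
print(ner("Václav Havel se narodil v Praze."))
```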
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5496 | 2.22 | 500 | 0.2782 | 0.7301 | 0.7750 | 0.7519 | 0.9275 |
| 0.2133 | 4.44 | 1000 | 0.2487 | 0.7811 | 0.8219 | 0.8010 | 0.9399 |
| 0.144 | 6.67 | 1500 | 0.2580 | 0.7737 | 0.8290 | 0.8004 | 0.9396 |
| 0.1029 | 8.89 | 2000 | 0.2576 | 0.7997 | 0.8480 | 0.8231 | 0.9446 |
| 0.0776 | 11.11 | 2500 | 0.2849 | 0.7990 | 0.8516 | 0.8244 | 0.9444 |
| 0.0601 | 13.33 | 3000 | 0.2971 | 0.8021 | 0.8523 | 0.8264 | 0.9450 |
| 0.0494 | 15.56 | 3500 | 0.3077 | 0.8014 | 0.8473 | 0.8237 | 0.9440 |
| 0.0408 | 17.78 | 4000 | 0.3145 | 0.8131 | 0.8555 | 0.8337 | 0.9448 |
| 0.0353 | 20.0 | 4500 | 0.3260 | 0.8097 | 0.8569 | 0.8327 | 0.9445 |
| 0.0311 | 22.22 | 5000 | 0.3356 | 0.8076 | 0.8541 | 0.8302 | 0.9441 |
| 0.0281 | 24.44 | 5500 | 0.3352 | 0.8093 | 0.8548 | 0.8314 | 0.9446 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"tags": ["generated_from_trainer"], "datasets": ["cnec"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "UWB-AIR/Czert-B-base-cased", "model-index": [{"name": "CNEC_2_0_Czert-B-base-cased", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "cnec", "type": "cnec", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "precision", "value": 0.8093464273620048, "name": "Precision"}, {"type": "recall", "value": 0.8547925608011445, "name": "Recall"}, {"type": "f1", "value": 0.8314489476430683, "name": "F1"}, {"type": "accuracy", "value": 0.9446311123820418, "name": "Accuracy"}]}]}]} | stulcrad/CNEC_2_0_Czert-B-base-cased | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:cnec",
"base_model:UWB-AIR/Czert-B-base-cased",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:45:15+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #token-classification #generated_from_trainer #dataset-cnec #base_model-UWB-AIR/Czert-B-base-cased #model-index #autotrain_compatible #endpoints_compatible #region-us
| CNEC\_2\_0\_Czert-B-base-cased
==============================
This model is a fine-tuned version of UWB-AIR/Czert-B-base-cased on the cnec dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3352
* Precision: 0.8093
* Recall: 0.8548
* F1: 0.8314
* Accuracy: 0.9446
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 25
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.16.1
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 25",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #safetensors #bert #token-classification #generated_from_trainer #dataset-cnec #base_model-UWB-AIR/Czert-B-base-cased #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 25",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
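Pending that information, a generic starting point might look like the sketch below; whether this repo stores a full model or only an adapter is not stated, so the loading path is an assumption.

```python
# Hedged sketch: the card does not confirm the checkpoint layout, so treat this as a guess.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "lilyray/falcon_7b_emo_motiv"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
```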
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": ["trl", "sft"]} | lilyray/falcon_7b_emo_motiv | null | [
"transformers",
"safetensors",
"trl",
"sft",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:46:43+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #trl #sft #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #trl #sft #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0191
- F1 Score: 0.7354
- Accuracy: 0.7355
## Model description
More information needed
## Intended uses & limitations
More information needed
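As a stopgap, here is a heavily hypothetical loading sketch with PEFT: the sequence-classification head, the two-label assumption, and the `trust_remote_code` flag are guesses that are not documented in this card, so verify them against the base model before relying on it.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_32768_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3-seqsight_32768_512_30M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # assumption: binary classification (F1/accuracy are reported above)
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter
```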
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5859 | 25.0 | 200 | 0.6038 | 0.6908 | 0.6921 |
| 0.4731 | 50.0 | 400 | 0.6390 | 0.6785 | 0.6787 |
| 0.4236 | 75.0 | 600 | 0.6752 | 0.6862 | 0.6867 |
| 0.3791 | 100.0 | 800 | 0.7006 | 0.6974 | 0.6974 |
| 0.3485 | 125.0 | 1000 | 0.7451 | 0.7025 | 0.7027 |
| 0.3302 | 150.0 | 1200 | 0.7676 | 0.7048 | 0.7047 |
| 0.3114 | 175.0 | 1400 | 0.8013 | 0.7034 | 0.7034 |
| 0.2995 | 200.0 | 1600 | 0.7825 | 0.6971 | 0.6974 |
| 0.2874 | 225.0 | 1800 | 0.8290 | 0.7048 | 0.7054 |
| 0.2785 | 250.0 | 2000 | 0.8333 | 0.6994 | 0.7001 |
| 0.2699 | 275.0 | 2200 | 0.8262 | 0.6990 | 0.6994 |
| 0.2595 | 300.0 | 2400 | 0.8784 | 0.6978 | 0.6981 |
| 0.2501 | 325.0 | 2600 | 0.9020 | 0.6927 | 0.6941 |
| 0.2417 | 350.0 | 2800 | 0.8898 | 0.7000 | 0.7007 |
| 0.231 | 375.0 | 3000 | 0.9151 | 0.7026 | 0.7027 |
| 0.2244 | 400.0 | 3200 | 0.9313 | 0.7054 | 0.7054 |
| 0.2172 | 425.0 | 3400 | 0.9252 | 0.7040 | 0.7041 |
| 0.2077 | 450.0 | 3600 | 0.9324 | 0.7022 | 0.7027 |
| 0.1993 | 475.0 | 3800 | 1.0074 | 0.7067 | 0.7067 |
| 0.193 | 500.0 | 4000 | 1.0003 | 0.7079 | 0.7081 |
| 0.186 | 525.0 | 4200 | 0.9485 | 0.7093 | 0.7094 |
| 0.179 | 550.0 | 4400 | 1.0009 | 0.7100 | 0.7101 |
| 0.174 | 575.0 | 4600 | 0.9871 | 0.7091 | 0.7094 |
| 0.1702 | 600.0 | 4800 | 1.0183 | 0.7028 | 0.7034 |
| 0.1624 | 625.0 | 5000 | 1.0331 | 0.7121 | 0.7121 |
| 0.1581 | 650.0 | 5200 | 1.0385 | 0.7125 | 0.7128 |
| 0.1543 | 675.0 | 5400 | 1.0208 | 0.7045 | 0.7047 |
| 0.1488 | 700.0 | 5600 | 1.0662 | 0.7071 | 0.7074 |
| 0.1451 | 725.0 | 5800 | 1.0748 | 0.7055 | 0.7061 |
| 0.141 | 750.0 | 6000 | 1.1233 | 0.7106 | 0.7108 |
| 0.1387 | 775.0 | 6200 | 1.0768 | 0.7073 | 0.7074 |
| 0.1367 | 800.0 | 6400 | 1.1209 | 0.7147 | 0.7148 |
| 0.1323 | 825.0 | 6600 | 1.1156 | 0.7073 | 0.7074 |
| 0.131 | 850.0 | 6800 | 1.1217 | 0.7093 | 0.7094 |
| 0.1279 | 875.0 | 7000 | 1.1401 | 0.7050 | 0.7054 |
| 0.1259 | 900.0 | 7200 | 1.0993 | 0.7100 | 0.7101 |
| 0.1243 | 925.0 | 7400 | 1.1349 | 0.7113 | 0.7114 |
| 0.1221 | 950.0 | 7600 | 1.1438 | 0.7106 | 0.7108 |
| 0.1204 | 975.0 | 7800 | 1.1263 | 0.7061 | 0.7061 |
| 0.119 | 1000.0 | 8000 | 1.1517 | 0.7072 | 0.7074 |
| 0.116 | 1025.0 | 8200 | 1.1385 | 0.7081 | 0.7081 |
| 0.1168 | 1050.0 | 8400 | 1.1292 | 0.7080 | 0.7081 |
| 0.1143 | 1075.0 | 8600 | 1.1664 | 0.7088 | 0.7088 |
| 0.1146 | 1100.0 | 8800 | 1.1307 | 0.7047 | 0.7047 |
| 0.1126 | 1125.0 | 9000 | 1.1763 | 0.7087 | 0.7088 |
| 0.1119 | 1150.0 | 9200 | 1.1726 | 0.7072 | 0.7074 |
| 0.1127 | 1175.0 | 9400 | 1.1458 | 0.7100 | 0.7101 |
| 0.1104 | 1200.0 | 9600 | 1.1706 | 0.7080 | 0.7081 |
| 0.1101 | 1225.0 | 9800 | 1.1697 | 0.7093 | 0.7094 |
| 0.1096 | 1250.0 | 10000 | 1.1699 | 0.7080 | 0.7081 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_EMP_H3-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T19:46:56+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_EMP\_H3-seqsight\_32768\_512\_30M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0191
* F1 Score: 0.7354
* Accuracy: 0.7355
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** codesagar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
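The card does not include a loading example; the sketch below assumes the repository can be loaded through Unsloth's `FastLanguageModel` the same way it was trained (the `max_seq_length` and 4-bit settings are illustrative, not values confirmed by the author).

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="codesagar/prompt-guard-classification-v10",
    max_seq_length=2048,   # assumption; adjust to the context length used in training
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference mode
```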
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | codesagar/prompt-guard-classification-v10 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:47:54+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: codesagar
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
# Uploaded model
- **Developed by:** codesagar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | codesagar/prompt-guard-reasoning-v10 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:49:06+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: codesagar
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | null |
# Gemma-2B-Code-Ties-it
Gemma-2B-Code-Ties-it is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mhenrichsen/gemma-2b-it](https://huggingface.co/mhenrichsen/gemma-2b-it)
* [omparghale/gemma-2b-it-code-finetuned](https://huggingface.co/omparghale/gemma-2b-it-code-finetuned)
## 🧩 Configuration
```yaml
models:
- model: mhenrichsen/gemma-2b-it
parameters:
density: 0.5
weight: 0.5
- model: omparghale/gemma-2b-it-code-finetuned
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: omparghale/gemma-2b-it-code-finetuned
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
```
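To reproduce the merge locally, the YAML above can be saved to a file and passed to the mergekit CLI; this is only a sketch and assumes mergekit is installed and both checkpoints can be downloaded on your machine.

```python
# notebook-style sketch: run the ties merge from the config above (saved as config.yaml)
!pip install -qU mergekit
!mergekit-yaml config.yaml ./Gemma-2B-Code-Ties-it-merged
```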
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "JoPmt/Gemma-2B-Code-Ties-it"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "mhenrichsen/gemma-2b-it", "omparghale/gemma-2b-it-code-finetuned"], "base_model": ["mhenrichsen/gemma-2b-it", "omparghale/gemma-2b-it-code-finetuned"]} | JoPmt/Gemma-2B-Code-Ties-it | null | [
"merge",
"mergekit",
"lazymergekit",
"mhenrichsen/gemma-2b-it",
"omparghale/gemma-2b-it-code-finetuned",
"base_model:mhenrichsen/gemma-2b-it",
"base_model:omparghale/gemma-2b-it-code-finetuned",
"region:us"
] | null | 2024-04-16T19:49:45+00:00 | [] | [] | TAGS
#merge #mergekit #lazymergekit #mhenrichsen/gemma-2b-it #omparghale/gemma-2b-it-code-finetuned #base_model-mhenrichsen/gemma-2b-it #base_model-omparghale/gemma-2b-it-code-finetuned #region-us
|
# Gemma-2B-Code-Ties-it
Gemma-2B-Code-Ties-it is a merge of the following models using LazyMergekit:
* mhenrichsen/gemma-2b-it
* omparghale/gemma-2b-it-code-finetuned
## Configuration
## Usage
| [
"# Gemma-2B-Code-Ties-it\n\nGemma-2B-Code-Ties-it is a merge of the following models using LazyMergekit:\n* mhenrichsen/gemma-2b-it\n* omparghale/gemma-2b-it-code-finetuned",
"## Configuration",
"## Usage"
] | [
"TAGS\n#merge #mergekit #lazymergekit #mhenrichsen/gemma-2b-it #omparghale/gemma-2b-it-code-finetuned #base_model-mhenrichsen/gemma-2b-it #base_model-omparghale/gemma-2b-it-code-finetuned #region-us \n",
"# Gemma-2B-Code-Ties-it\n\nGemma-2B-Code-Ties-it is a merge of the following models using LazyMergekit:\n* mhenrichsen/gemma-2b-it\n* omparghale/gemma-2b-it-code-finetuned",
"## Configuration",
"## Usage"
] |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
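Until the author provides an official example, the tags on this repository (RoBERTa, fill-mask) suggest it can be driven through the standard fill-mask pipeline; the SMILES string below is illustrative only.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="yzimmermann/ChemBERTa-77M-MLM-safetensors")
mask = fill.tokenizer.mask_token  # avoid hard-coding the mask token
print(fill(f"c1ccccc1{mask}", top_k=3))  # top completions for a masked SMILES position
```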
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | yzimmermann/ChemBERTa-77M-MLM-safetensors | null | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:49:49+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #roberta #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #roberta #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_hh_usp1_dpo9
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5218
- Rewards/chosen: 4.1305
- Rewards/rejected: -4.5499
- Rewards/accuracies: 0.6900
- Rewards/margins: 8.6805
- Logps/rejected: -117.5696
- Logps/chosen: -105.7146
- Logits/rejected: -0.1040
- Logits/chosen: -0.0322
## Model description
More information needed
## Intended uses & limitations
More information needed
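Pending more detail from the author, a minimal sketch for trying the checkpoint is to attach the DPO-trained adapter to the base chat model with PEFT; the dtype and device placement below are illustrative choices, not documented settings.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "guoyu-zhang/model_hh_usp1_dpo9"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # apply the DPO-tuned LoRA weights
```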
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
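For readers who want to approximate this recipe, the hyperparameters above map roughly onto the following `TrainingArguments`; this is only a sketch, since the author's actual training script, preference dataset, and TRL `DPOTrainer` wiring are not documented here.

```python
from transformers import TrainingArguments

# rough mapping of the listed hyperparameters; output_dir is illustrative
training_args = TrainingArguments(
    output_dir="model_hh_usp1_dpo9",
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,  # 4 x 4 = effective batch size of 16
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
)
# these arguments would then be handed to trl's DPOTrainer together with the
# base model, a reference model, and a preference dataset (not shown here)
```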
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0799 | 2.67 | 100 | 2.1948 | 0.0613 | -2.4589 | 0.6400 | 2.5202 | -115.2462 | -110.2360 | -0.3648 | -0.3265 |
| 0.0245 | 5.33 | 200 | 3.0197 | -6.4081 | -13.9036 | 0.6600 | 7.4954 | -127.9625 | -117.4242 | -0.6260 | -0.5678 |
| 0.0588 | 8.0 | 300 | 2.7834 | -6.7237 | -15.1196 | 0.7200 | 8.3958 | -129.3136 | -117.7749 | -0.3355 | -0.2470 |
| 0.0 | 10.67 | 400 | 2.5742 | 5.1319 | -3.5242 | 0.6600 | 8.6561 | -116.4299 | -104.6020 | -0.1518 | -0.0842 |
| 0.0 | 13.33 | 500 | 2.5287 | 4.1796 | -4.4812 | 0.7100 | 8.6608 | -117.4932 | -105.6601 | -0.1019 | -0.0304 |
| 0.0 | 16.0 | 600 | 2.5247 | 4.1189 | -4.5458 | 0.7000 | 8.6646 | -117.5649 | -105.7276 | -0.1019 | -0.0304 |
| 0.0 | 18.67 | 700 | 2.5257 | 4.1387 | -4.5517 | 0.7000 | 8.6904 | -117.5715 | -105.7055 | -0.1024 | -0.0311 |
| 0.0 | 21.33 | 800 | 2.5440 | 4.1061 | -4.5342 | 0.7000 | 8.6404 | -117.5521 | -105.7417 | -0.1035 | -0.0320 |
| 0.0 | 24.0 | 900 | 2.5242 | 4.1345 | -4.5168 | 0.7100 | 8.6512 | -117.5327 | -105.7102 | -0.1029 | -0.0313 |
| 0.0 | 26.67 | 1000 | 2.5218 | 4.1305 | -4.5499 | 0.6900 | 8.6805 | -117.5696 | -105.7146 | -0.1040 | -0.0322 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_hh_usp1_dpo9", "results": []}]} | guoyu-zhang/model_hh_usp1_dpo9 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-16T19:51:36+00:00 | [] | [] | TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
| model\_hh\_usp1\_dpo9
=====================
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.5218
* Rewards/chosen: 4.1305
* Rewards/rejected: -4.5499
* Rewards/accuracies: 0.6900
* Rewards/margins: 8.6805
* Logps/rejected: -117.5696
* Logps/chosen: -105.7146
* Logits/rejected: -0.1040
* Logits/chosen: -0.0322
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 4
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* training\_steps: 1000
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
summarization | transformers | # BART (large-sized model), fine-tuned on scientific_papers
BART Lecture Summarization is a model fine-tuned to summarize lectures. It was trained on a dataset of scientific papers, whose content structure is similar to that of lectures.
The model is paired with a custom summarization function tailored specifically to lecture content.
## Intended uses & limitations
The primary use case for the BART Lecture Summarization model is to condense lecture content into concise summaries.
It is designed to assist students, educators, and researchers in extracting key information from lectures for study, reference, or review purposes.
### How to use
Here is how to use this model. I added the chunking and recursive summarization helpers below to keep the summaries from becoming too brief on long lectures.
```python
import numpy as np
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("MariamMounnir/Bart_SP")
tokenizer = BartTokenizer.from_pretrained("MariamMounnir/Bart_SP")
def summarize(text, maxSummarylength=500):
# Encode the text and summarize
inputs = tokenizer.encode("summarize: " +
text,
return_tensors="pt",
max_length=1024, truncation=True)
summary_ids = model.generate(inputs, max_length=maxSummarylength,
min_length=int(maxSummarylength/5),
length_penalty=10.0,
num_beams=4, early_stopping=True)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
return summary
def split_text_into_pieces(text,
max_tokens=900,
overlapPercent=10):
# Tokenize the text
tokens = tokenizer.tokenize(text)
# Calculate the overlap in tokens
overlap_tokens = int(max_tokens * overlapPercent / 100)
# Split the tokens into chunks of size
# max_tokens with overlap
pieces = [tokens[i:i + max_tokens]
for i in range(0, len(tokens),
max_tokens - overlap_tokens)]
# Convert the token pieces back into text
text_pieces = [tokenizer.decode(
tokenizer.convert_tokens_to_ids(piece),
skip_special_tokens=True) for piece in pieces]
return text_pieces
def recursive_summarize(text, max_length=200):
tokens = tokenizer.tokenize(text)
expectedCountOfChunks = len(tokens)/max_length
max_length=int(len(tokens)/expectedCountOfChunks)+2
# Break the text into pieces of max_length
pieces = split_text_into_pieces(text, max_tokens=max_length)
print("Number of pieces: ", len(pieces))
# Summarize each piece
summaries=[]
    for k in range(len(pieces)):
piece=pieces[k]
print("****************************************************")
print("Piece:",(k+1)," out of ", len(pieces), "pieces")
print(piece, "\n")
summary =summarize(piece, maxSummarylength=max_length/3*2)
summaries.append(summary)
print("SUMNMARY: ", summary)
print("****************************************************")
concatenated_summary = ' '.join(summaries)
return concatenated_summary
ARTICLE = """In this class, you learn about the state of the art and also practice implementing machine learning algorithms yourself.
You learn about the most important machine learning algorithms, some of which are exactly what's being used in large AI or large tech
companies today, and you get a sense of what is the state of the art in AI. Beyond learning the algorithms though, in this class,
you also learn all the important practical tips and tricks for making them perform well, and you get to implement them and see how they work for yourself.
So why is machine learning so widely used today? Machine learning had grown up as a subfield of AI or artificial intelligence.
We wanted to build intelligent machines, and it turns out that there are a few big of things that we could program a machine to do,
such as how to find the shortest path from A to B, like in your GPS. But for the most part,
we just did not know how to write an explicit program to do many of the more interesting things, such as perform web search,
recognize human speech, diagnose diseases from X-rays, or build a self-driving car. The only way we knew how to do these things was to have a machine learn to do it by itself.
For me, when I found it and was leading the Google Brain team, I worked on problems like speech recognition, computer vision for Google Maps review images, and advertising.
Or leading AI by two, I worked on everything from AI for augmented reality to combating payment forward to leading a self-driving car team. Most recently,
at Lending AI, AI find an established university up and gain to work on AI applications and manufacturing, large-scale agriculture, healthcare e-commerce,
and other problems. Today, there are hundreds of thousands, perhaps millions of people working on machine learning applications who could tell you still more stories about their work with machine learning.
When you've learned these skills, I hope that you too will find it great fun to dabble in exciting different applications and maybe even different industries. In fact
, I find it hard to think of any industry that machine learning is unlikely to touch in a significant way now and in the near future.
I mean, even further into the future, many people, including me, are excited about the AI dream of someday building machines as intelligence as you or me.
This is sometimes called artificial general intelligence or AI. I think AI has been over height and was still a long way away from that goal.
I don't know if it'll take 50 years or 500 years or longer to get there, but most AI researchers believe that the best way to get closer to what that goal is by using learning algorithms,
maybe once that takes some inspiration from how the human brain works. You also hear a little more about this quest for AGI later in this course. According to a study by McKinsey,
AI and machine learning is estimated to create an additional 13 trillion US dollars of value annually by the year 2013. Even though machine learning is already creating tremendous amounts of value in the software industry,
I think there could be even vastly greater value that is yet to be created outside the software industry, in sectors such as retail, travel, transportation, automotive, materials, manufacturing and so on. Because of the massive untouched opportunities across so many different sectors,
today there is a vast, unfulfilled demand for this skill set. That's why this is such a great time to be learning about machine learning. If you find machine learning applications exciting, I hope you stick with me through this course.
I can almost guarantee that you find mastering these skills worthwhile. In the next video, we'll look at a more formal definition of what is machine learning.
And we'll begin to talk about the main types of machine learning problems and algorithms. You pick up some of the main machine learning terminology and start to get a sense of what are the different algorithms and when each one might be appropriate.
So let's go on to the next video"""
final_summary = recursive_summarize(ARTICLE)
print("\n%%%%%%%%%%%%%%%%%%%%%\n")
print("Final summary:", final_summary)
>>>Final summary: machine learning is the state of the art in artificial intelligence. In this class, you learn about the most important machine learning algorithms and practical tips and tricks for implementing them. we just did not know how to write an explicit program to do many of the more interesting things, such as perform web search, recognize human speech, diagnose diseases from X-rays, or build a self-driving car. The only way we knew how to do these things was to have a machine learn to do it by itself. machine learning can be applied to a wide range of industries and applications. I find it hard to think of any industry that machine learning is unlikely to touch in a significant way now and in the near future. Many people, including me, are excited about the AI dream of someday building machines as intelligence as you or me. AI and machine learning is estimated to create an additional 13 trillion US dollars of value annually by the year 2013. Because of the massive untouched opportunities across so many different sectors, today there is a vast, unfulfilled demand for this skill set. machine learning is an important skill that can be taught in a number of ways. In the next video, we'll look at a more formal definition of what is machine learning. And we'll begin to talk about the main types of machine learning problems and algorithms
| {"language": ["en"], "license": "mit", "datasets": ["scientific_papers"], "pipeline_tag": "summarization"} | MariamMounnir/Bart_SP | null | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"en",
"dataset:scientific_papers",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:51:58+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #bart #text2text-generation #summarization #en #dataset-scientific_papers #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
| # BART (large-sized model), fine-tuned on scientific_papers
BART Lecture Summarization is a model fine-tuned to summarize lectures, utilizing a dataset of scientific papers due to its similarity in content structure to lectures.
The model employs a custom summarization function tailored specifically for lecture content.
## Intended uses & limitations
The primary use case for the BART Lecture Summarization model is to condense lecture content into concise summaries.
It is designed to assist students, educators, and researchers in extracting key information from lectures for study, reference, or review purposes.
### How to use
Here is how to use this model: I added this function to avoid making the summarization too brief.
'''python
import numpy as np
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("MariamMounnir/Bart_SP")
tokenizer = BartTokenizer.from_pretrained("MariamMounnir/Bart_SP")
def summarize(text, maxSummarylength=500):
# Encode the text and summarize
inputs = URL("summarize: " +
text,
return_tensors="pt",
max_length=1024, truncation=True)
summary_ids = model.generate(inputs, max_length=maxSummarylength,
min_length=int(maxSummarylength/5),
length_penalty=10.0,
num_beams=4, early_stopping=True)
summary = URL(summary_ids[0], skip_special_tokens=True)
return summary
def split_text_into_pieces(text,
max_tokens=900,
overlapPercent=10):
# Tokenize the text
tokens = tokenizer.tokenize(text)
# Calculate the overlap in tokens
overlap_tokens = int(max_tokens * overlapPercent / 100)
# Split the tokens into chunks of size
# max_tokens with overlap
pieces = [tokens[i:i + max_tokens]
for i in range(0, len(tokens),
max_tokens - overlap_tokens)]
# Convert the token pieces back into text
text_pieces = [URL(
tokenizer.convert_tokens_to_ids(piece),
skip_special_tokens=True) for piece in pieces]
return text_pieces
def recursive_summarize(text, max_length=200):
tokens = tokenizer.tokenize(text)
expectedCountOfChunks = len(tokens)/max_length
max_length=int(len(tokens)/expectedCountOfChunks)+2
# Break the text into pieces of max_length
pieces = split_text_into_pieces(text, max_tokens=max_length)
print("Number of pieces: ", len(pieces))
# Summarize each piece
summaries=[]
k=0
for k in range(0, len(pieces)):
piece=pieces[k]
print("")
print("Piece:",(k+1)," out of ", len(pieces), "pieces")
print(piece, "\n")
summary =summarize(piece, maxSummarylength=max_length/3*2)
URL(summary)
print("SUMNMARY: ", summary)
print("")
concatenated_summary = ' '.join(summaries)
return concatenated_summary
ARTICLE = """In this class, you learn about the state of the art and also practice implementing machine learning algorithms yourself.
You learn about the most important machine learning algorithms, some of which are exactly what's being used in large AI or large tech
companies today, and you get a sense of what is the state of the art in AI. Beyond learning the algorithms though, in this class,
you also learn all the important practical tips and tricks for making them perform well, and you get to implement them and see how they work for yourself.
So why is machine learning so widely used today? Machine learning had grown up as a subfield of AI or artificial intelligence.
We wanted to build intelligent machines, and it turns out that there are a few big of things that we could program a machine to do,
such as how to find the shortest path from A to B, like in your GPS. But for the most part,
we just did not know how to write an explicit program to do many of the more interesting things, such as perform web search,
recognize human speech, diagnose diseases from X-rays, or build a self-driving car. The only way we knew how to do these things was to have a machine learn to do it by itself.
For me, when I found it and was leading the Google Brain team, I worked on problems like speech recognition, computer vision for Google Maps review images, and advertising.
Or leading AI by two, I worked on everything from AI for augmented reality to combating payment forward to leading a self-driving car team. Most recently,
at Lending AI, AI find an established university up and gain to work on AI applications and manufacturing, large-scale agriculture, healthcare e-commerce,
and other problems. Today, there are hundreds of thousands, perhaps millions of people working on machine learning applications who could tell you still more stories about their work with machine learning.
When you've learned these skills, I hope that you too will find it great fun to dabble in exciting different applications and maybe even different industries. In fact
, I find it hard to think of any industry that machine learning is unlikely to touch in a significant way now and in the near future.
I mean, even further into the future, many people, including me, are excited about the AI dream of someday building machines as intelligence as you or me.
This is sometimes called artificial general intelligence or AI. I think AI has been over height and was still a long way away from that goal.
I don't know if it'll take 50 years or 500 years or longer to get there, but most AI researchers believe that the best way to get closer to what that goal is by using learning algorithms,
maybe once that takes some inspiration from how the human brain works. You also hear a little more about this quest for AGI later in this course. According to a study by McKinsey,
AI and machine learning is estimated to create an additional 13 trillion US dollars of value annually by the year 2013. Even though machine learning is already creating tremendous amounts of value in the software industry,
I think there could be even vastly greater value that is yet to be created outside the software industry, in sectors such as retail, travel, transportation, automotive, materials, manufacturing and so on. Because of the massive untouched opportunities across so many different sectors,
today there is a vast, unfulfilled demand for this skill set. That's why this is such a great time to be learning about machine learning. If you find machine learning applications exciting, I hope you stick with me through this course.
I can almost guarantee that you find mastering these skills worthwhile. In the next video, we'll look at a more formal definition of what is machine learning.
And we'll begin to talk about the main types of machine learning problems and algorithms. You pick up some of the main machine learning terminology and start to get a sense of what are the different algorithms and when each one might be appropriate.
So let's go on to the next video"""
final_summary = recursive_summarize(ARTICLE)
print("\n%%%%%%%%%%%%%%%%%%%%%\n")
print("Final summary:", final_summary)
>>>Final summary: machine learning is the state of the art in artificial intelligence. In this class, you learn about the most important machine learning algorithms and practical tips and tricks for implementing them. we just did not know how to write an explicit program to do many of the more interesting things, such as perform web search, recognize human speech, diagnose diseases from X-rays, or build a self-driving car. The only way we knew how to do these things was to have a machine learn to do it by itself. machine learning can be applied to a wide range of industries and applications. I find it hard to think of any industry that machine learning is unlikely to touch in a significant way now and in the near future. Many people, including me, are excited about the AI dream of someday building machines as intelligence as you or me. AI and machine learning is estimated to create an additional 13 trillion US dollars of value annually by the year 2013. Because of the massive untouched opportunities across so many different sectors, today there is a vast, unfulfilled demand for this skill set. machine learning is an important skill that can be taught in a number of ways. In the next video, we'll look at a more formal definition of what is machine learning. And we'll begin to talk about the main types of machine learning problems and algorithms
| [
"# BART (large-sized model), fine-tuned on scientific_papers\n\nBART Lecture Summarization is a model fine-tuned to summarize lectures, utilizing a dataset of scientific papers due to its similarity in content structure to lectures.\nThe model employs a custom summarization function tailored specifically for lecture content.",
"## Intended uses & limitations\n\nThe primary use case for the BART Lecture Summarization model is to condense lecture content into concise summaries.\nIt is designed to assist students, educators, and researchers in extracting key information from lectures for study, reference, or review purposes.",
"### How to use\n\nHere is how to use this model: I added this function to avoid making the summarization too brief.\n\n\n'''python\nimport numpy as np\nfrom transformers import BartForConditionalGeneration, BartTokenizer\n\nmodel = BartForConditionalGeneration.from_pretrained(\"MariamMounnir/Bart_SP\")\ntokenizer = BartTokenizer.from_pretrained(\"MariamMounnir/Bart_SP\")\n\ndef summarize(text, maxSummarylength=500):\n # Encode the text and summarize\n inputs = URL(\"summarize: \" +\n text,\n return_tensors=\"pt\",\n max_length=1024, truncation=True)\n summary_ids = model.generate(inputs, max_length=maxSummarylength,\n min_length=int(maxSummarylength/5),\n length_penalty=10.0,\n num_beams=4, early_stopping=True)\n summary = URL(summary_ids[0], skip_special_tokens=True)\n return summary\n\ndef split_text_into_pieces(text,\n max_tokens=900,\n overlapPercent=10):\n # Tokenize the text\n tokens = tokenizer.tokenize(text)\n\n # Calculate the overlap in tokens\n overlap_tokens = int(max_tokens * overlapPercent / 100)\n\n # Split the tokens into chunks of size\n # max_tokens with overlap\n pieces = [tokens[i:i + max_tokens]\n for i in range(0, len(tokens),\n max_tokens - overlap_tokens)]\n\n # Convert the token pieces back into text\n text_pieces = [URL(\n tokenizer.convert_tokens_to_ids(piece),\n skip_special_tokens=True) for piece in pieces]\n\n return text_pieces\n\n\ndef recursive_summarize(text, max_length=200):\n\n tokens = tokenizer.tokenize(text)\n expectedCountOfChunks = len(tokens)/max_length\n max_length=int(len(tokens)/expectedCountOfChunks)+2\n\n # Break the text into pieces of max_length\n pieces = split_text_into_pieces(text, max_tokens=max_length)\n\n print(\"Number of pieces: \", len(pieces))\n # Summarize each piece\n summaries=[]\n k=0\n for k in range(0, len(pieces)):\n piece=pieces[k]\n print(\"\")\n print(\"Piece:\",(k+1),\" out of \", len(pieces), \"pieces\")\n print(piece, \"\\n\")\n summary =summarize(piece, maxSummarylength=max_length/3*2)\n URL(summary)\n print(\"SUMNMARY: \", summary)\n print(\"\")\n\n concatenated_summary = ' '.join(summaries)\n\n\n return concatenated_summary\n\nARTICLE = \"\"\"In this class, you learn about the state of the art and also practice implementing machine learning algorithms yourself.\n You learn about the most important machine learning algorithms, some of which are exactly what's being used in large AI or large tech\ncompanies today, and you get a sense of what is the state of the art in AI. Beyond learning the algorithms though, in this class,\n you also learn all the important practical tips and tricks for making them perform well, and you get to implement them and see how they work for yourself.\n So why is machine learning so widely used today? Machine learning had grown up as a subfield of AI or artificial intelligence.\n We wanted to build intelligent machines, and it turns out that there are a few big of things that we could program a machine to do,\n such as how to find the shortest path from A to B, like in your GPS. But for the most part,\n we just did not know how to write an explicit program to do many of the more interesting things, such as perform web search,\nrecognize human speech, diagnose diseases from X-rays, or build a self-driving car. 
The only way we knew how to do these things was to have a machine learn to do it by itself.\n For me, when I found it and was leading the Google Brain team, I worked on problems like speech recognition, computer vision for Google Maps review images, and advertising.\nOr leading AI by two, I worked on everything from AI for augmented reality to combating payment forward to leading a self-driving car team. Most recently,\n at Lending AI, AI find an established university up and gain to work on AI applications and manufacturing, large-scale agriculture, healthcare e-commerce,\n and other problems. Today, there are hundreds of thousands, perhaps millions of people working on machine learning applications who could tell you still more stories about their work with machine learning.\n When you've learned these skills, I hope that you too will find it great fun to dabble in exciting different applications and maybe even different industries. In fact\n, I find it hard to think of any industry that machine learning is unlikely to touch in a significant way now and in the near future.\n I mean, even further into the future, many people, including me, are excited about the AI dream of someday building machines as intelligence as you or me.\n This is sometimes called artificial general intelligence or AI. I think AI has been over height and was still a long way away from that goal.\nI don't know if it'll take 50 years or 500 years or longer to get there, but most AI researchers believe that the best way to get closer to what that goal is by using learning algorithms,\nmaybe once that takes some inspiration from how the human brain works. You also hear a little more about this quest for AGI later in this course. According to a study by McKinsey,\n AI and machine learning is estimated to create an additional 13 trillion US dollars of value annually by the year 2013. Even though machine learning is already creating tremendous amounts of value in the software industry,\n I think there could be even vastly greater value that is yet to be created outside the software industry, in sectors such as retail, travel, transportation, automotive, materials, manufacturing and so on. Because of the massive untouched opportunities across so many different sectors,\n today there is a vast, unfulfilled demand for this skill set. That's why this is such a great time to be learning about machine learning. If you find machine learning applications exciting, I hope you stick with me through this course.\n I can almost guarantee that you find mastering these skills worthwhile. In the next video, we'll look at a more formal definition of what is machine learning.\n And we'll begin to talk about the main types of machine learning problems and algorithms. You pick up some of the main machine learning terminology and start to get a sense of what are the different algorithms and when each one might be appropriate.\n So let's go on to the next video\"\"\"\n\n\nfinal_summary = recursive_summarize(ARTICLE)\nprint(\"\\n%%%%%%%%%%%%%%%%%%%%%\\n\")\nprint(\"Final summary:\", final_summary)\n>>>Final summary: machine learning is the state of the art in artificial intelligence. In this class, you learn about the most important machine learning algorithms and practical tips and tricks for implementing them. we just did not know how to write an explicit program to do many of the more interesting things, such as perform web search, recognize human speech, diagnose diseases from X-rays, or build a self-driving car. 
The only way we knew how to do these things was to have a machine learn to do it by itself. machine learning can be applied to a wide range of industries and applications. I find it hard to think of any industry that machine learning is unlikely to touch in a significant way now and in the near future. Many people, including me, are excited about the AI dream of someday building machines as intelligence as you or me. AI and machine learning is estimated to create an additional 13 trillion US dollars of value annually by the year 2013. Because of the massive untouched opportunities across so many different sectors, today there is a vast, unfulfilled demand for this skill set. machine learning is an important skill that can be taught in a number of ways. In the next video, we'll look at a more formal definition of what is machine learning. And we'll begin to talk about the main types of machine learning problems and algorithms"
] | [
"TAGS\n#transformers #safetensors #bart #text2text-generation #summarization #en #dataset-scientific_papers #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# BART (large-sized model), fine-tuned on scientific_papers\n\nBART Lecture Summarization is a model fine-tuned to summarize lectures, utilizing a dataset of scientific papers due to its similarity in content structure to lectures.\nThe model employs a custom summarization function tailored specifically for lecture content.",
"## Intended uses & limitations\n\nThe primary use case for the BART Lecture Summarization model is to condense lecture content into concise summaries.\nIt is designed to assist students, educators, and researchers in extracting key information from lectures for study, reference, or review purposes.",
"### How to use\n\nHere is how to use this model: I added this function to avoid making the summarization too brief.\n\n\n'''python\nimport numpy as np\nfrom transformers import BartForConditionalGeneration, BartTokenizer\n\nmodel = BartForConditionalGeneration.from_pretrained(\"MariamMounnir/Bart_SP\")\ntokenizer = BartTokenizer.from_pretrained(\"MariamMounnir/Bart_SP\")\n\ndef summarize(text, maxSummarylength=500):\n # Encode the text and summarize\n inputs = URL(\"summarize: \" +\n text,\n return_tensors=\"pt\",\n max_length=1024, truncation=True)\n summary_ids = model.generate(inputs, max_length=maxSummarylength,\n min_length=int(maxSummarylength/5),\n length_penalty=10.0,\n num_beams=4, early_stopping=True)\n summary = URL(summary_ids[0], skip_special_tokens=True)\n return summary\n\ndef split_text_into_pieces(text,\n max_tokens=900,\n overlapPercent=10):\n # Tokenize the text\n tokens = tokenizer.tokenize(text)\n\n # Calculate the overlap in tokens\n overlap_tokens = int(max_tokens * overlapPercent / 100)\n\n # Split the tokens into chunks of size\n # max_tokens with overlap\n pieces = [tokens[i:i + max_tokens]\n for i in range(0, len(tokens),\n max_tokens - overlap_tokens)]\n\n # Convert the token pieces back into text\n text_pieces = [URL(\n tokenizer.convert_tokens_to_ids(piece),\n skip_special_tokens=True) for piece in pieces]\n\n return text_pieces\n\n\ndef recursive_summarize(text, max_length=200):\n\n tokens = tokenizer.tokenize(text)\n expectedCountOfChunks = len(tokens)/max_length\n max_length=int(len(tokens)/expectedCountOfChunks)+2\n\n # Break the text into pieces of max_length\n pieces = split_text_into_pieces(text, max_tokens=max_length)\n\n print(\"Number of pieces: \", len(pieces))\n # Summarize each piece\n summaries=[]\n k=0\n for k in range(0, len(pieces)):\n piece=pieces[k]\n print(\"\")\n print(\"Piece:\",(k+1),\" out of \", len(pieces), \"pieces\")\n print(piece, \"\\n\")\n summary =summarize(piece, maxSummarylength=max_length/3*2)\n URL(summary)\n print(\"SUMNMARY: \", summary)\n print(\"\")\n\n concatenated_summary = ' '.join(summaries)\n\n\n return concatenated_summary\n\nARTICLE = \"\"\"In this class, you learn about the state of the art and also practice implementing machine learning algorithms yourself.\n You learn about the most important machine learning algorithms, some of which are exactly what's being used in large AI or large tech\ncompanies today, and you get a sense of what is the state of the art in AI. Beyond learning the algorithms though, in this class,\n you also learn all the important practical tips and tricks for making them perform well, and you get to implement them and see how they work for yourself.\n So why is machine learning so widely used today? Machine learning had grown up as a subfield of AI or artificial intelligence.\n We wanted to build intelligent machines, and it turns out that there are a few big of things that we could program a machine to do,\n such as how to find the shortest path from A to B, like in your GPS. But for the most part,\n we just did not know how to write an explicit program to do many of the more interesting things, such as perform web search,\nrecognize human speech, diagnose diseases from X-rays, or build a self-driving car. 
The only way we knew how to do these things was to have a machine learn to do it by itself.\n For me, when I found it and was leading the Google Brain team, I worked on problems like speech recognition, computer vision for Google Maps review images, and advertising.\nOr leading AI by two, I worked on everything from AI for augmented reality to combating payment forward to leading a self-driving car team. Most recently,\n at Lending AI, AI find an established university up and gain to work on AI applications and manufacturing, large-scale agriculture, healthcare e-commerce,\n and other problems. Today, there are hundreds of thousands, perhaps millions of people working on machine learning applications who could tell you still more stories about their work with machine learning.\n When you've learned these skills, I hope that you too will find it great fun to dabble in exciting different applications and maybe even different industries. In fact\n, I find it hard to think of any industry that machine learning is unlikely to touch in a significant way now and in the near future.\n I mean, even further into the future, many people, including me, are excited about the AI dream of someday building machines as intelligence as you or me.\n This is sometimes called artificial general intelligence or AI. I think AI has been over height and was still a long way away from that goal.\nI don't know if it'll take 50 years or 500 years or longer to get there, but most AI researchers believe that the best way to get closer to what that goal is by using learning algorithms,\nmaybe once that takes some inspiration from how the human brain works. You also hear a little more about this quest for AGI later in this course. According to a study by McKinsey,\n AI and machine learning is estimated to create an additional 13 trillion US dollars of value annually by the year 2013. Even though machine learning is already creating tremendous amounts of value in the software industry,\n I think there could be even vastly greater value that is yet to be created outside the software industry, in sectors such as retail, travel, transportation, automotive, materials, manufacturing and so on. Because of the massive untouched opportunities across so many different sectors,\n today there is a vast, unfulfilled demand for this skill set. That's why this is such a great time to be learning about machine learning. If you find machine learning applications exciting, I hope you stick with me through this course.\n I can almost guarantee that you find mastering these skills worthwhile. In the next video, we'll look at a more formal definition of what is machine learning.\n And we'll begin to talk about the main types of machine learning problems and algorithms. You pick up some of the main machine learning terminology and start to get a sense of what are the different algorithms and when each one might be appropriate.\n So let's go on to the next video\"\"\"\n\n\nfinal_summary = recursive_summarize(ARTICLE)\nprint(\"\\n%%%%%%%%%%%%%%%%%%%%%\\n\")\nprint(\"Final summary:\", final_summary)\n>>>Final summary: machine learning is the state of the art in artificial intelligence. In this class, you learn about the most important machine learning algorithms and practical tips and tricks for implementing them. we just did not know how to write an explicit program to do many of the more interesting things, such as perform web search, recognize human speech, diagnose diseases from X-rays, or build a self-driving car. 
The only way we knew how to do these things was to have a machine learn to do it by itself. machine learning can be applied to a wide range of industries and applications. I find it hard to think of any industry that machine learning is unlikely to touch in a significant way now and in the near future. Many people, including me, are excited about the AI dream of someday building machines as intelligence as you or me. AI and machine learning is estimated to create an additional 13 trillion US dollars of value annually by the year 2013. Because of the massive untouched opportunities across so many different sectors, today there is a vast, unfulfilled demand for this skill set. machine learning is an important skill that can be taught in a number of ways. In the next video, we'll look at a more formal definition of what is machine learning. And we'll begin to talk about the main types of machine learning problems and algorithms"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# DoubelT/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0094
- Validation Loss: 0.0004
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
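Pending details from the author, the snippet below is a minimal inference sketch, assuming the checkpoint is used as a plain seq2seq summarizer; the model id comes from this card, while the input text and generation settings are illustrative assumptions.

```python
# Minimal TF inference sketch (input text and generation settings are illustrative assumptions).
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "DoubelT/mt5-small-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "I loved this book: the characters were great and the ending surprised me."  # hypothetical review
inputs = tokenizer(text, return_tensors="tf", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=30, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```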
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 89496, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
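For readers who want to reproduce this setup, the logged optimizer corresponds roughly to the Keras construction below. This is a reconstruction from the hyperparameters above, not the author's training script; with TensorFlow 2.16, the `tf-keras` compatibility package may also be required by the Transformers TF classes.

```python
# Reconstruction of the logged optimizer/schedule (a sketch, not the original training code).
import tensorflow as tf
from transformers import AdamWeightDecay

tf.keras.mixed_precision.set_global_policy("mixed_float16")  # training_precision: mixed_float16

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5.6e-5, decay_steps=89496, end_learning_rate=0.0, power=1.0
)
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    weight_decay_rate=0.01,
    beta_1=0.9, beta_2=0.999, epsilon=1e-8,
)
```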
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.3246 | 0.1471 | 0 |
| 0.0799 | 0.0075 | 1 |
| 0.0190 | 0.0004 | 2 |
| 0.0094 | 0.0004 | 3 |
### Framework versions
- Transformers 4.40.1
- TensorFlow 2.16.1
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "google/mt5-small", "model-index": [{"name": "DoubelT/mt5-small-finetuned-amazon-en-es", "results": []}]} | DoubelT/mt5-small-finetuned-amazon-en-es | null | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T19:53:02+00:00 | [] | [] | TAGS
#transformers #tf #mt5 #text2text-generation #generated_from_keras_callback #base_model-google/mt5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| DoubelT/mt5-small-finetuned-amazon-en-es
========================================
This model is a fine-tuned version of google/mt5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.0094
* Validation Loss: 0.0004
* Epoch: 3
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 5.6e-05, 'decay\_steps': 89496, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\_decay\_rate': 0.01}
* training\_precision: mixed\_float16
### Training results
### Framework versions
* Transformers 4.40.1
* TensorFlow 2.16.1
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 5.6e-05, 'decay\\_steps': 89496, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: mixed\\_float16",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.16.1\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tf #mt5 #text2text-generation #generated_from_keras_callback #base_model-google/mt5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 5.6e-05, 'decay\\_steps': 89496, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: mixed\\_float16",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.16.1\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: google/gemma-2b
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: Harsh1729/hotpotqa_uncertain
type: alpaca
split: train
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./hotpotqa_uncertain-qlora-out
hub_model_id: Harsh1729/gemma2b-hotpotqa_uncertain-v1
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00005
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_ratio: 0.02
evals_per_epoch: 1
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed: # deepspeed_configs/zero2.json # multi-gpu only
weight_decay: 0.1
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 0.00000001
max_grad_norm: 1.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# gemma2b-hotpotqa_uncertain-v1
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3151
## Model description
More information needed
## Intended uses & limitations
More information needed
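Until the author fills this in, the sketch below shows one plausible way to load the adapter for inference. The base model id, adapter id, and alpaca prompt type come from the config above; the 4-bit settings and the prompt wording are assumptions (google/gemma-2b is gated and requires accepting the Gemma license).

```python
# Sketch: attach this QLoRA adapter to the 4-bit base model (prompt wording is a placeholder).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "google/gemma-2b"                             # gated base model
adapter_id = "Harsh1729/gemma2b-hotpotqa_uncertain-v1"

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Alpaca-style prompt, matching `type: alpaca` in the config; the question itself is hypothetical.
prompt = "### Instruction:\nWhich magazine was started first, Arthur's Magazine or First for Women?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```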
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 59
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0391 | 1.0 | 3675 | 0.3151 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.0 | {"license": "gemma", "library_name": "peft", "tags": ["axolotl", "generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "gemma2b-hotpotqa_uncertain-v1", "results": []}]} | Harsh1729/gemma2b-hotpotqa_uncertain-v1 | null | [
"peft",
"tensorboard",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"4-bit",
"region:us"
] | null | 2024-04-16T19:53:20+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #gemma #axolotl #generated_from_trainer #base_model-google/gemma-2b #license-gemma #4-bit #region-us
| <img src="URL" alt="Built with Axolotl" width="200" height="32"/>
See axolotl config
axolotl version: '0.4.0'
gemma2b-hotpotqa\_uncertain-v1
==============================
This model is a fine-tuned version of google/gemma-2b on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3151
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 59
* num\_epochs: 1
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0.dev0
* Pytorch 2.2.1
* Datasets 2.18.0
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 59\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1\n* Datasets 2.18.0\n* Tokenizers 0.15.0"
] | [
"TAGS\n#peft #tensorboard #safetensors #gemma #axolotl #generated_from_trainer #base_model-google/gemma-2b #license-gemma #4-bit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 59\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1\n* Datasets 2.18.0\n* Tokenizers 0.15.0"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-facebook-opt350m-with-own-piidata
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the pii_ner_instruction_fine_tuning dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
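In the absence of documentation, here is a small, hedged sketch that loads the adapter with PEFT's auto class (assuming the repo contains a standard adapter config); the prompt format expected by the PII instruction data is unknown and shown only as a placeholder.

```python
# Sketch: load the SFT adapter on top of facebook/opt-350m in one call (prompt is a placeholder).
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "acram/sft-facebook-opt350m-with-own-piidata"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)   # base model is resolved from the adapter config
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

prompt = "Extract the personally identifiable information from: John Doe lives at 12 Main St."  # hypothetical
out = model.generate(**tokenizer(prompt, return_tensors="pt"), max_new_tokens=48)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```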
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0 | {"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "facebook/opt-350m", "model-index": [{"name": "sft-facebook-opt350m-with-own-piidata", "results": []}]} | acram/sft-facebook-opt350m-with-own-piidata | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"license:other",
"region:us"
] | null | 2024-04-16T19:53:49+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-facebook/opt-350m #license-other #region-us
|
# sft-facebook-opt350m-with-own-piidata
This model is a fine-tuned version of facebook/opt-350m on the pii_ner_instruction_fine_tuning dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0 | [
"# sft-facebook-opt350m-with-own-piidata\n\nThis model is a fine-tuned version of facebook/opt-350m on the pii_ner_instruction_fine_tuning dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.41e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.1\n- Transformers 4.36.2\n- Pytorch 2.0.0\n- Datasets 2.15.0\n- Tokenizers 0.15.0"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-facebook/opt-350m #license-other #region-us \n",
"# sft-facebook-opt350m-with-own-piidata\n\nThis model is a fine-tuned version of facebook/opt-350m on the pii_ner_instruction_fine_tuning dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.41e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.1\n- Transformers 4.36.2\n- Pytorch 2.0.0\n- Datasets 2.15.0\n- Tokenizers 0.15.0"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
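No starter code has been added yet; as a hedged placeholder, the sketch below loads the base Mistral-7B-Instruct-v0.2 model, attaches this adapter, and queries it through the tokenizer's chat template (the example question is illustrative).

```python
# Hedged starter sketch: base model + this PEFT adapter + the Mistral chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "showvikdbz/Enlighten_Instruct")

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]  # placeholder prompt
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(base.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```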
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"} | showvikdbz/Enlighten_Instruct | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-04-16T19:54:47+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ConvNeXT_AI_image_detector
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0518
- Accuracy: 0.9826
## Model description
More information needed
## Intended uses & limitations
More information needed
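The training data and label set are not documented; assuming the repo name reflects the task (detecting AI-generated images), a minimal inference sketch would be:

```python
# Minimal sketch: classify a local image; the label names depend on the (undocumented) training data.
from transformers import pipeline

classifier = pipeline("image-classification", model="mmanikanta/ConvNeXT_AI_image_detector")
print(classifier("example.jpg"))  # e.g. [{"label": "...", "score": 0.98}, ...]
```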
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2164 | 1.0 | 1093 | 0.1476 | 0.9444 |
| 0.1647 | 2.0 | 2187 | 0.0872 | 0.9673 |
| 0.1784 | 3.0 | 3281 | 0.0761 | 0.9709 |
| 0.1247 | 4.0 | 4375 | 0.0869 | 0.9666 |
| 0.0994 | 5.0 | 5468 | 0.0808 | 0.9696 |
| 0.073 | 6.0 | 6562 | 0.0514 | 0.9820 |
| 0.0924 | 7.0 | 7656 | 0.0526 | 0.9809 |
| 0.0779 | 8.0 | 8750 | 0.0518 | 0.9819 |
| 0.0903 | 9.0 | 9843 | 0.0518 | 0.9826 |
| 0.0721 | 9.99 | 10930 | 0.0512 | 0.9817 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "ConvNeXT_AI_image_detector", "results": []}]} | mmanikanta/ConvNeXT_AI_image_detector | null | [
"transformers",
"pytorch",
"tensorboard",
"convnext",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:55:48+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #convnext #image-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ConvNeXT\_AI\_image\_detector
=============================
This model is a fine-tuned version of facebook/convnext-tiny-224 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0518
* Accuracy: 0.9826
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.30.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.13.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.30.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #convnext #image-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.30.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3"
] |
text-generation | transformers |
# Gophos - Sophos Log Interpreter - Gemma 2B-IT Fine-tuned Model
## Overview
This repository contains a fine-tuned version of the Gemma 2B-IT model, tailored specifically for interpreting Sophos logs exported from Splunk. The model is hosted on Hugging Face for easy integration and usage in various applications requiring interpretation and analysis of Sophos logs.
## Model Description
The Gemma 2B-IT model has been fine-tuned using a dataset of Sophos logs extracted from Splunk. Through this fine-tuning process, the model has been optimized to effectively interpret and extract meaningful information from Sophos logs, facilitating tasks such as threat detection, security analysis, and incident response.
## Usage
To utilize the model, simply install the Hugging Face `transformers` library and load the model using its unique identifier or name:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the fine-tuned Gemma 2B-IT model (a causal language model, per the repo's text-generation pipeline tag)
model = AutoModelForCausalLM.from_pretrained("SadokBarbouche/gophos")
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("SadokBarbouche/gophos")
```
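Once loaded, the model can be queried like any causal language model. The self-contained example below is illustrative only: the Sophos/Splunk log line and the prompt wording are assumptions, since the training data is not public.

```python
# Illustrative usage sketch (the log line and prompt wording are hypothetical).
from transformers import pipeline

interpreter = pipeline("text-generation", model="SadokBarbouche/gophos")
log_line = 'date=2024-04-16 time=10:32:01 log_type="Firewall" status="Deny" src_ip=10.0.0.5 dst_port=445'
prompt = f"Interpret the following Sophos log entry and describe its security implication:\n{log_line}\n"
print(interpreter(prompt, max_new_tokens=120)[0]["generated_text"])
```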
## Data Preparation
The fine-tuning of the Gemma 2B-IT model was conducted using a dataset of Sophos logs exported from Splunk. The dataset was preprocessed to ensure compatibility with the model architecture and to optimize training performance.
## Acknowledgements
We would like to acknowledge the creators of the Gemma 2B-IT model for their pioneering work in natural language understanding. Additionally, we extend our gratitude to the contributors of the Hugging Face `transformers` library for their valuable tools and resources. | {} | SadokBarbouche/gophos | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T19:56:01+00:00 | [] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Gophos - Sophos Log Interpreter - Gemma 2B-IT Fine-tuned Model
## Overview
This repository contains a fine-tuned version of the Gemma 2B-IT model, tailored specifically for interpreting Sophos logs exported from Splunk. The model is hosted on Hugging Face for easy integration and usage in various applications requiring interpretation and analysis of Sophos logs.
## Model Description
The Gemma 2B-IT model has been fine-tuned using a dataset of Sophos logs extracted from Splunk. Through this fine-tuning process, the model has been optimized to effectively interpret and extract meaningful information from Sophos logs, facilitating tasks such as threat detection, security analysis, and incident response.
## Usage
To utilize the model, simply install the Hugging Face 'transformers' library and load the model using its unique identifier or name:
## Data Preparation
The fine-tuning of the Gemma 2B-IT model was conducted using a dataset of Sophos logs exported from Splunk. The dataset was preprocessed to ensure compatibility with the model architecture and to optimize training performance.
## Acknowledgements
We would like to acknowledge the creators of the Gemma 2B-IT model for their pioneering work in natural language understanding. Additionally, we extend our gratitude to the contributors of the Hugging Face 'transformers' library for their valuable tools and resources. | [
"# Gophos - Sophos Log Interpreter - Gemma 2B-IT Fine-tuned Model",
"## Overview\nThis repository contains a fine-tuned version of the Gemma 2B-IT model, tailored specifically for interpreting Sophos logs exported from Splunk. The model is hosted on Hugging Face for easy integration and usage in various applications requiring interpretation and analysis of Sophos logs.",
"## Model Description\nThe Gemma 2B-IT model, has been fine-tuned using a dataset of Sophos logs extracted from Splunk. Through this fine-tuning process, the model has been optimized to effectively interpret and extract meaningful information from Sophos logs, facilitating tasks such as threat detection, security analysis, and incident response.",
"## Usage\nTo utilize the model, simply install the Hugging Face 'transformers' library and load the model using its unique identifier or name:",
"## Data Preparation\nThe fine-tuning of the Gemma 2B-IT model was conducted using a dataset of Sophos logs exported from Splunk. The dataset was preprocessed to ensure compatibility with the model architecture and to optimize training performance.",
"## Acknowledgements\nWe would like to acknowledge the creators of the Gemma 2B-IT model for their pioneering work in natural language understanding. Additionally, we extend our gratitude to the contributors of the Hugging Face 'transformers' library for their valuable tools and resources."
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Gophos - Sophos Log Interpreter - Gemma 2B-IT Fine-tuned Model",
"## Overview\nThis repository contains a fine-tuned version of the Gemma 2B-IT model, tailored specifically for interpreting Sophos logs exported from Splunk. The model is hosted on Hugging Face for easy integration and usage in various applications requiring interpretation and analysis of Sophos logs.",
"## Model Description\nThe Gemma 2B-IT model, has been fine-tuned using a dataset of Sophos logs extracted from Splunk. Through this fine-tuning process, the model has been optimized to effectively interpret and extract meaningful information from Sophos logs, facilitating tasks such as threat detection, security analysis, and incident response.",
"## Usage\nTo utilize the model, simply install the Hugging Face 'transformers' library and load the model using its unique identifier or name:",
"## Data Preparation\nThe fine-tuning of the Gemma 2B-IT model was conducted using a dataset of Sophos logs exported from Splunk. The dataset was preprocessed to ensure compatibility with the model architecture and to optimize training performance.",
"## Acknowledgements\nWe would like to acknowledge the creators of the Gemma 2B-IT model for their pioneering work in natural language understanding. Additionally, we extend our gratitude to the contributors of the Hugging Face 'transformers' library for their valuable tools and resources."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruBert-base-sberquad-0.02-len_3-filtered-negative
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.02-len_3-filtered-negative", "results": []}]} | Shalazary/ruBert-base-sberquad-0.02-len_3-filtered-negative | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ai-forever/ruBert-base",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T19:56:28+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
|
# ruBert-base-sberquad-0.02-len_3-filtered-negative
This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# ruBert-base-sberquad-0.02-len_3-filtered-negative\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n",
"# ruBert-base-sberquad-0.02-len_3-filtered-negative\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8276
- F1 Score: 0.5843
- Accuracy: 0.5853
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6765 | 14.29 | 200 | 0.6741 | 0.5763 | 0.5812 |
| 0.6334 | 28.57 | 400 | 0.6987 | 0.5770 | 0.5765 |
| 0.6131 | 42.86 | 600 | 0.7038 | 0.5764 | 0.5765 |
| 0.5947 | 57.14 | 800 | 0.7300 | 0.5749 | 0.5786 |
| 0.5779 | 71.43 | 1000 | 0.7299 | 0.5743 | 0.5742 |
| 0.5657 | 85.71 | 1200 | 0.7356 | 0.5706 | 0.5716 |
| 0.5564 | 100.0 | 1400 | 0.7324 | 0.5765 | 0.5762 |
| 0.5496 | 114.29 | 1600 | 0.7350 | 0.5813 | 0.5815 |
| 0.5416 | 128.57 | 1800 | 0.7579 | 0.5756 | 0.5757 |
| 0.537 | 142.86 | 2000 | 0.7476 | 0.5788 | 0.5783 |
| 0.5314 | 157.14 | 2200 | 0.7639 | 0.5824 | 0.5821 |
| 0.5266 | 171.43 | 2400 | 0.7703 | 0.5764 | 0.5760 |
| 0.5193 | 185.71 | 2600 | 0.7378 | 0.5832 | 0.5842 |
| 0.5152 | 200.0 | 2800 | 0.7559 | 0.5835 | 0.5859 |
| 0.5099 | 214.29 | 3000 | 0.7974 | 0.5852 | 0.5850 |
| 0.5041 | 228.57 | 3200 | 0.7759 | 0.5846 | 0.5848 |
| 0.4959 | 242.86 | 3400 | 0.7827 | 0.5890 | 0.5886 |
| 0.4904 | 257.14 | 3600 | 0.7937 | 0.5888 | 0.5883 |
| 0.4836 | 271.43 | 3800 | 0.7992 | 0.5863 | 0.5868 |
| 0.4756 | 285.71 | 4000 | 0.7843 | 0.5876 | 0.5874 |
| 0.4703 | 300.0 | 4200 | 0.8115 | 0.5905 | 0.5900 |
| 0.4648 | 314.29 | 4400 | 0.8193 | 0.5913 | 0.5909 |
| 0.4608 | 328.57 | 4600 | 0.8320 | 0.5915 | 0.5933 |
| 0.4536 | 342.86 | 4800 | 0.8299 | 0.5859 | 0.5862 |
| 0.4489 | 357.14 | 5000 | 0.8825 | 0.5899 | 0.5906 |
| 0.4436 | 371.43 | 5200 | 0.8388 | 0.5838 | 0.5833 |
| 0.438 | 385.71 | 5400 | 0.8174 | 0.5844 | 0.5842 |
| 0.4347 | 400.0 | 5600 | 0.8332 | 0.5831 | 0.5836 |
| 0.4283 | 414.29 | 5800 | 0.8438 | 0.5843 | 0.5839 |
| 0.4256 | 428.57 | 6000 | 0.8468 | 0.5826 | 0.5833 |
| 0.4221 | 442.86 | 6200 | 0.8583 | 0.5826 | 0.5830 |
| 0.4179 | 457.14 | 6400 | 0.8567 | 0.5793 | 0.5809 |
| 0.4152 | 471.43 | 6600 | 0.8598 | 0.5777 | 0.5783 |
| 0.4106 | 485.71 | 6800 | 0.8590 | 0.5826 | 0.5827 |
| 0.4079 | 500.0 | 7000 | 0.8878 | 0.5811 | 0.5821 |
| 0.4056 | 514.29 | 7200 | 0.8449 | 0.5807 | 0.5821 |
| 0.4022 | 528.57 | 7400 | 0.8765 | 0.5841 | 0.5839 |
| 0.4004 | 542.86 | 7600 | 0.8739 | 0.5845 | 0.5850 |
| 0.3974 | 557.14 | 7800 | 0.8986 | 0.5832 | 0.5833 |
| 0.3955 | 571.43 | 8000 | 0.8712 | 0.5835 | 0.5836 |
| 0.3926 | 585.71 | 8200 | 0.8765 | 0.5880 | 0.5880 |
| 0.3895 | 600.0 | 8400 | 0.8681 | 0.5840 | 0.5839 |
| 0.3899 | 614.29 | 8600 | 0.8929 | 0.5838 | 0.5839 |
| 0.3883 | 628.57 | 8800 | 0.8883 | 0.5877 | 0.5874 |
| 0.3857 | 642.86 | 9000 | 0.8858 | 0.5818 | 0.5821 |
| 0.3843 | 657.14 | 9200 | 0.8850 | 0.5853 | 0.5853 |
| 0.3839 | 671.43 | 9400 | 0.8833 | 0.5878 | 0.5877 |
| 0.3826 | 685.71 | 9600 | 0.8930 | 0.5892 | 0.5891 |
| 0.3818 | 700.0 | 9800 | 0.8916 | 0.5873 | 0.5871 |
| 0.3814 | 714.29 | 10000 | 0.8917 | 0.5874 | 0.5874 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T19:56:48+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_EMP\_H4ac-seqsight\_32768\_512\_30M-L32\_all
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8276
* F1 Score: 0.5843
* Accuracy: 0.5853
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tapt_helpfulness_unipelt_pretraining_model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 21
- eval_batch_size: 21
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.7846 | 1.0 | 232 | 2.5050 |
| 2.4019 | 2.0 | 465 | 2.0958 |
| 2.1426 | 3.0 | 697 | 1.9449 |
| 2.0218 | 4.0 | 930 | 1.8522 |
| 1.9507 | 5.0 | 1162 | 1.8076 |
| 1.8903 | 6.0 | 1395 | 1.7539 |
| 1.8596 | 7.0 | 1627 | 1.7311 |
| 1.8213 | 8.0 | 1860 | 1.7060 |
| 1.8026 | 9.0 | 2092 | 1.6792 |
| 1.7766 | 10.0 | 2325 | 1.6698 |
| 1.7661 | 11.0 | 2557 | 1.6504 |
| 1.7469 | 12.0 | 2790 | 1.6404 |
| 1.7332 | 13.0 | 3022 | 1.6280 |
| 1.72 | 14.0 | 3255 | 1.6189 |
| 1.7135 | 15.0 | 3487 | 1.6213 |
| 1.6895 | 16.0 | 3720 | 1.6018 |
| 1.6974 | 17.0 | 3952 | 1.5918 |
| 1.6777 | 18.0 | 4185 | 1.5808 |
| 1.6794 | 19.0 | 4417 | 1.5978 |
| 1.6657 | 20.0 | 4650 | 1.5771 |
| 1.6661 | 21.0 | 4882 | 1.5759 |
| 1.6508 | 22.0 | 5115 | 1.5634 |
| 1.6535 | 23.0 | 5347 | 1.5830 |
| 1.649 | 24.0 | 5580 | 1.5624 |
| 1.6454 | 25.0 | 5812 | 1.5526 |
| 1.629 | 26.0 | 6045 | 1.5491 |
| 1.6356 | 27.0 | 6277 | 1.5479 |
| 1.6186 | 28.0 | 6510 | 1.5614 |
| 1.621 | 29.0 | 6742 | 1.5604 |
| 1.6255 | 30.0 | 6975 | 1.5376 |
| 1.6178 | 31.0 | 7207 | 1.5383 |
| 1.6096 | 32.0 | 7440 | 1.5390 |
| 1.6128 | 33.0 | 7672 | 1.5304 |
| 1.6068 | 34.0 | 7905 | 1.5242 |
| 1.6073 | 35.0 | 8137 | 1.5331 |
| 1.5983 | 36.0 | 8370 | 1.5167 |
| 1.6041 | 37.0 | 8602 | 1.5245 |
| 1.5926 | 38.0 | 8835 | 1.5294 |
| 1.6024 | 39.0 | 9067 | 1.5280 |
| 1.5868 | 40.0 | 9300 | 1.5183 |
| 1.592 | 41.0 | 9532 | 1.5302 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "roberta-base", "model-index": [{"name": "tapt_helpfulness_unipelt_pretraining_model", "results": []}]} | ltuzova/tapt_helpfulness_unipelt_pretraining_model | null | [
"tensorboard",
"generated_from_trainer",
"base_model:roberta-base",
"license:mit",
"region:us"
] | null | 2024-04-16T19:58:13+00:00 | [] | [] | TAGS
#tensorboard #generated_from_trainer #base_model-roberta-base #license-mit #region-us
| tapt\_helpfulness\_unipelt\_pretraining\_model
==============================================
This model is a fine-tuned version of roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5302
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 21
* eval\_batch\_size: 21
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 42
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
* lr\_scheduler\_type: linear
* num\_epochs: 100
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 21\n* eval\\_batch\\_size: 21\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#tensorboard #generated_from_trainer #base_model-roberta-base #license-mit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 21\n* eval\\_batch\\_size: 21\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
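In the absence of an official snippet, a hypothetical quick-start could look like the sketch below. It assumes, based only on the repository name, that this checkpoint is a causal-language-model fine-tune of Gemma 2B with full weights stored in the repo; adjust the task and model class if that assumption turns out to be wrong.

```python
# Hypothetical quick-start sketch; "karsar/Gemma2B_finetune" is assumed to be a
# causal-LM checkpoint with full weights (this is not confirmed by the card).
from transformers import pipeline

generator = pipeline("text-generation", model="karsar/Gemma2B_finetune", device_map="auto")
print(generator("Hello, my name is", max_new_tokens=32)[0]["generated_text"])
```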
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | karsar/Gemma2B_finetune | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T19:59:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: mhenrichsen/alpaca_2k_test
type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./out
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# out
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9125
## Model description
More information needed
## Intended uses & limitations
More information needed
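Although usage is not documented here, a minimal generation sketch might look like the following. Because the axolotl config above trains on `mhenrichsen/alpaca_2k_test` with the alpaca format, an Alpaca-style instruction prompt is assumed; the prompt text itself is only an illustration.

```python
# Minimal, hedged generation sketch (not part of the original card).
# Assumption: Alpaca-style prompts, since the training data used the alpaca format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rachfop/mistral-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain fine-tuning in one sentence.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```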
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9711 | 0.8 | 1 | 1.0638 |
| 0.9758 | 1.4 | 2 | 1.0348 |
| 0.9554 | 2.2 | 3 | 0.9725 |
| 0.9379 | 2.8 | 4 | 0.9125 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "out", "results": []}]} | rachfop/mistral-v1 | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T20:01:00+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #safetensors #mistral #text-generation #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| <img src="URL alt="Built with Axolotl" width="200" height="32"/>
See axolotl config
axolotl version: '0.4.0'
out
===
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9125
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-06
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* total\_eval\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 10
* num\_epochs: 4
### Training results
### Framework versions
* Transformers 4.40.0.dev0
* Pytorch 2.1.2+cu118
* Datasets 2.18.0
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.1.2+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #safetensors #mistral #text-generation #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.1.2+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.0"
] |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
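One plausible starting point, given that the tags mark this as a RoBERTa-style fill-mask model over SMILES strings, is the sketch below; the masked aspirin SMILES is just an illustrative input.

```python
# Illustrative fill-mask sketch (not from the original card); the example molecule
# (aspirin with its final atom masked) is an arbitrary choice.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="yzimmermann/ChemBERTa-zinc-base-v1-safetensors")
for prediction in fill_mask("CC(=O)OC1=CC=CC=C1C(=O)<mask>"):
    print(prediction["token_str"], round(prediction["score"], 4))
```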
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | yzimmermann/ChemBERTa-zinc-base-v1-safetensors | null | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T20:01:34+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #roberta #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #roberta #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_lora_7epoch
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
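Although the card does not document usage, a hedged sketch for speech-to-text inference could look like the following. It assumes the LoRA adapter was trained on top of the CTC head of `facebook/wav2vec2-base-960h`, which the card above does not confirm.

```python
# Hedged inference sketch: assumes the adapter targets the CTC (speech-to-text) head
# of facebook/wav2vec2-base-960h; `speech` is a 1-D float array sampled at 16 kHz.
import torch
from peft import PeftModel
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

base_id = "facebook/wav2vec2-base-960h"
processor = Wav2Vec2Processor.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    Wav2Vec2ForCTC.from_pretrained(base_id),
    "Chijioke-Mgbahurike/wav2vec2_lora_7epoch",
)

def transcribe(speech):
    inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(predicted_ids)[0]
```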
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "facebook/wav2vec2-base-960h", "model-index": [{"name": "wav2vec2_lora_7epoch", "results": []}]} | Chijioke-Mgbahurike/wav2vec2_lora_7epoch | null | [
"peft",
"tensorboard",
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base-960h",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T20:02:30+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #wav2vec2 #generated_from_trainer #base_model-facebook/wav2vec2-base-960h #license-apache-2.0 #region-us
|
# wav2vec2_lora_7epoch
This model is a fine-tuned version of facebook/wav2vec2-base-960h on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# wav2vec2_lora_7epoch\n\nThis model is a fine-tuned version of facebook/wav2vec2-base-960h on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #wav2vec2 #generated_from_trainer #base_model-facebook/wav2vec2-base-960h #license-apache-2.0 #region-us \n",
"# wav2vec2_lora_7epoch\n\nThis model is a fine-tuned version of facebook/wav2vec2-base-960h on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# remote_sensing_gpt_expt
This model is a fine-tuned version of [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0859
## Model description
More information needed
## Intended uses & limitations
More information needed
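Since the adapter was trained on top of `bigscience/bloom-1b1`, a causal language model, the hedged sketch below shows one way it might be loaded with PEFT's `AutoPeftModelForCausalLM`; the prompt is only an illustration.

```python
# Illustrative sketch (not from the original card): the adapter is assumed to load
# directly on top of bigscience/bloom-1b1 via AutoPeftModelForCausalLM.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b1")
model = AutoPeftModelForCausalLM.from_pretrained("gremlin97/remote_sensing_gpt_expt")

inputs = tokenizer("Remote sensing is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```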
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.568 | 1.0 | 829 | 5.1888 |
| 5.0135 | 2.0 | 1658 | 5.1070 |
| 4.9725 | 3.0 | 2487 | 5.0859 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "bigscience-bloom-rail-1.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "bigscience/bloom-1b1", "model-index": [{"name": "remote_sensing_gpt_expt", "results": []}]} | gremlin97/remote_sensing_gpt_expt | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloom-1b1",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-04-16T20:02:33+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-bigscience/bloom-1b1 #license-bigscience-bloom-rail-1.0 #region-us
| remote\_sensing\_gpt\_expt
==========================
This model is a fine-tuned version of bigscience/bloom-1b1 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 5.0859
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-bigscience/bloom-1b1 #license-bigscience-bloom-rail-1.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
reinforcement-learning | ml-agents |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: b0n541/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
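If you prefer to grab the trained `.onnx`/`.nn` files locally instead of using the in-browser viewer, a small `huggingface_hub` sketch such as the one below should work; the download location is whatever path the library's cache returns.

```python
# Optional helper (not part of the original card): download the repository files,
# including the trained .onnx/.nn agent, with huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="b0n541/ppo-Huggy")
print("Model files downloaded to:", local_dir)
```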
| {"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]} | b0n541/ppo-Huggy | null | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | null | 2024-04-16T20:02:37+00:00 | [] | [] | TAGS
#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
|
# ppo Agent playing Huggy
This is a trained model of a ppo agent playing Huggy
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how ML-Agents works:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: b0n541/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
| [
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: b0n541/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] | [
"TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n",
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: b0n541/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | yzimmermann/ChemBERTa_zinc250k_v2_40k-safetensors | null | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T20:02:57+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #roberta #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #roberta #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
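Pending official instructions, a minimal sketch for loading the published tokenizer is shown below. Judging only from the repository name, this appears to be a tutorial/toy tokenizer for MMM-style symbolic-music tokens built on Lakh MIDI data, so the sample string and its token format are assumptions for illustration, not documented behavior.

```python
from transformers import AutoTokenizer

# Load the tokenizer files published in this repository.
tokenizer = AutoTokenizer.from_pretrained("MikeMpapa/lmd_mmm_tokenizer_tutorial_artist_toy")

# Illustrative input only: real inputs should follow whatever event format
# this tokenizer was actually trained on.
sample = "PIECE_START TRACK_START INST=0 BAR_START NOTE_ON=60 TIME_DELTA=4 NOTE_OFF=60"
encoded = tokenizer(sample)
print(encoded["input_ids"])
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
```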
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | MikeMpapa/lmd_mmm_tokenizer_tutorial_artist_toy | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T20:03:34+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Spaetzle-v66-7b
Spaetzle-v66-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [flemmingmiguel/NeuDist-Ro-7B](https://huggingface.co/flemmingmiguel/NeuDist-Ro-7B)
* [cstr/Spaetzle-v53-7b](https://huggingface.co/cstr/Spaetzle-v53-7b)
* [ResplendentAI/Flora_DPO_7B](https://huggingface.co/ResplendentAI/Flora_DPO_7B)
## 🧩 Configuration
```yaml
models:
- model: mayflowergmbh/Wiedervereinigung-7b-dpo
# no parameters necessary for base model
- model: flemmingmiguel/NeuDist-Ro-7B
parameters:
density: 0.60
weight: 0.30
- model: cstr/Spaetzle-v53-7b
parameters:
density: 0.65
weight: 0.40
- model: ResplendentAI/Flora_DPO_7B
parameters:
density: 0.6
weight: 0.3
merge_method: dare_ties
base_model: mayflowergmbh/Wiedervereinigung-7b-dpo
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "cstr/Spaetzle-v66-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "flemmingmiguel/NeuDist-Ro-7B", "cstr/Spaetzle-v53-7b", "ResplendentAI/Flora_DPO_7B"], "base_model": ["flemmingmiguel/NeuDist-Ro-7B", "cstr/Spaetzle-v53-7b", "ResplendentAI/Flora_DPO_7B"]} | cstr/Spaetzle-v66-7b | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"flemmingmiguel/NeuDist-Ro-7B",
"cstr/Spaetzle-v53-7b",
"ResplendentAI/Flora_DPO_7B",
"conversational",
"base_model:flemmingmiguel/NeuDist-Ro-7B",
"base_model:cstr/Spaetzle-v53-7b",
"base_model:ResplendentAI/Flora_DPO_7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T20:04:56+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #flemmingmiguel/NeuDist-Ro-7B #cstr/Spaetzle-v53-7b #ResplendentAI/Flora_DPO_7B #conversational #base_model-flemmingmiguel/NeuDist-Ro-7B #base_model-cstr/Spaetzle-v53-7b #base_model-ResplendentAI/Flora_DPO_7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Spaetzle-v66-7b
Spaetzle-v66-7b is a merge of the following models using LazyMergekit:
* flemmingmiguel/NeuDist-Ro-7B
* cstr/Spaetzle-v53-7b
* ResplendentAI/Flora_DPO_7B
## Configuration
## Usage
| [
"# Spaetzle-v66-7b\n\nSpaetzle-v66-7b is a merge of the following models using LazyMergekit:\n* flemmingmiguel/NeuDist-Ro-7B\n* cstr/Spaetzle-v53-7b\n* ResplendentAI/Flora_DPO_7B",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #flemmingmiguel/NeuDist-Ro-7B #cstr/Spaetzle-v53-7b #ResplendentAI/Flora_DPO_7B #conversational #base_model-flemmingmiguel/NeuDist-Ro-7B #base_model-cstr/Spaetzle-v53-7b #base_model-ResplendentAI/Flora_DPO_7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Spaetzle-v66-7b\n\nSpaetzle-v66-7b is a merge of the following models using LazyMergekit:\n* flemmingmiguel/NeuDist-Ro-7B\n* cstr/Spaetzle-v53-7b\n* ResplendentAI/Flora_DPO_7B",
"## Configuration",
"## Usage"
] |
audio-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-birdclef
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the Birdclef 2024 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8592
- Accuracy: 0.6975
- F1 Macro: 0.4507
- F1 Weighted: 0.6871
## Model description
More information needed
## Intended uses & limitations
More information needed
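Pending a documented example, a minimal, hedged inference sketch is given below; the checkpoint id comes from this card, while the audio file path is a placeholder (DistilHuBERT expects 16 kHz input, and the `pipeline` helper resamples file inputs automatically when ffmpeg is available).

```python
from transformers import pipeline

# Audio-classification pipeline over the fine-tuned checkpoint from this card.
classifier = pipeline(
    "audio-classification",
    model="nmks/distilhubert-finetuned-birdclef",
)

# "example_bird_call.ogg" is a placeholder path to a local recording.
predictions = classifier("example_bird_call.ogg", top_k=5)
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```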
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:-----------:|
| 3.3212 | 1.0 | 2446 | 3.4889 | 0.2343 | 0.0453 | 0.1567 |
| 2.5841 | 2.0 | 4892 | 2.1989 | 0.5123 | 0.1673 | 0.4489 |
| 1.5152 | 3.0 | 7338 | 1.8349 | 0.5871 | 0.2449 | 0.5452 |
| 1.4771 | 4.0 | 9784 | 1.6815 | 0.6300 | 0.3213 | 0.6048 |
| 1.0287 | 5.0 | 12230 | 1.6218 | 0.6627 | 0.3498 | 0.6462 |
| 0.9425 | 6.0 | 14676 | 1.6177 | 0.6688 | 0.3835 | 0.6511 |
| 0.291 | 7.0 | 17122 | 1.7205 | 0.6832 | 0.3903 | 0.6682 |
| 0.244 | 8.0 | 19568 | 1.7817 | 0.6811 | 0.4049 | 0.6706 |
| 0.0593 | 9.0 | 22014 | 1.8653 | 0.6881 | 0.4282 | 0.6755 |
| 0.0754 | 10.0 | 24460 | 1.8877 | 0.6917 | 0.4320 | 0.6823 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["birdclef/hubert"], "metrics": ["accuracy"], "base_model": "ntu-spml/distilhubert", "model-index": [{"name": "distilhubert-finetuned-birdclef", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "Birdclef 2024", "type": "birdclef/hubert", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6974652493867539, "name": "Accuracy"}]}]}]} | nmks/distilhubert-finetuned-birdclef | null | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:birdclef/hubert",
"base_model:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2024-04-16T20:05:55+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #hubert #audio-classification #generated_from_trainer #dataset-birdclef/hubert #base_model-ntu-spml/distilhubert #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| distilhubert-finetuned-birdclef
===============================
This model is a fine-tuned version of ntu-spml/distilhubert on the Birdclef 2024 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.8592
* Accuracy: 0.6975
* F1 Macro: 0.4507
* F1 Weighted: 0.6871
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #hubert #audio-classification #generated_from_trainer #dataset-birdclef/hubert #base_model-ntu-spml/distilhubert #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
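Pending official instructions, a minimal, hedged sketch is shown below; the expected prompt format is undocumented, so the conversational input is purely illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "kssumanth6/t5_small_chit_chat_generator_v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative prompt only; the training prompt format is not documented.
inputs = tokenizer("Hi! How was your weekend?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```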
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | kssumanth6/t5_small_chit_chat_generator_v2 | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T20:07:27+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Mixtral-8x22B-Instruct-v0.1
The Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1).
## Run the model
```python
from transformers import AutoModelForCausalLM
from mistral_common.protocol.instruct.messages import (
AssistantMessage,
UserMessage,
)
from mistral_common.protocol.instruct.tool_calls import (
Tool,
Function,
)
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.normalize import ChatCompletionRequest
import torch

device = "cuda"  # the device to load the model onto
tokenizer_v3 = MistralTokenizer.v3()
mistral_query = ChatCompletionRequest(
tools=[
Tool(
function=Function(
name="get_current_weather",
description="Get the current weather",
parameters={
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use. Infer this from the users location.",
},
},
"required": ["location", "format"],
},
)
)
],
messages=[
UserMessage(content="What's the weather like today in Paris"),
],
model="test",
)
encodeds = tokenizer_v3.encode_chat_completion(mistral_query).tokens
model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x22B-Instruct-v0.1")
model_inputs = torch.tensor([encodeds], device=device)  # wrap the token-id list in a batched tensor
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
sp_tokenizer = tokenizer_v3.instruct_tokenizer.tokenizer
decoded = sp_tokenizer.decode(generated_ids[0].tolist())  # SentencePiece decode expects a plain list of ids
print(decoded)
```
Alternatively, you can run this example with the Hugging Face tokenizer.
To use this example, you'll need transformers version 4.39.0 or higher.
```console
pip install transformers==4.39.0
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x22B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
conversation=[
{"role": "user", "content": "What's the weather like in Paris?"},
{
"role": "tool_calls",
"content": [
{
"name": "get_current_weather",
"arguments": {"location": "Paris, France", "format": "celsius"},
}
]
},
{
"role": "tool_results",
"content": {"content": 22}
},
{"role": "assistant", "content": "The current temperature in Paris, France is 22 degrees Celsius."},
{"role": "user", "content": "What about San Francisco?"}
]
tools = [{"type": "function", "function": {"name":"get_current_weather", "description": "Get▁the▁current▁weather", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "format": {"type": "string", "enum": ["celsius", "fahrenheit"], "description": "The temperature unit to use. Infer this from the users location."}},"required":["location","format"]}}}]
# render the tool use prompt as a string:
tool_use_prompt = tokenizer.apply_chat_template(
conversation,
chat_template="tool_use",
tools=tools,
tokenize=False,
add_generation_prompt=True,
)
model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x22B-Instruct-v0.1")
inputs = tokenizer(tool_use_prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
# Instruct tokenizer
The HuggingFace tokenizer included in this release should match our own. To compare:
`pip install mistral-common`
```py
from mistral_common.protocol.instruct.messages import (
AssistantMessage,
UserMessage,
)
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.normalize import ChatCompletionRequest
from transformers import AutoTokenizer
tokenizer_v3 = MistralTokenizer.v3()
mistral_query = ChatCompletionRequest(
messages=[
UserMessage(content="How many experts ?"),
AssistantMessage(content="8"),
UserMessage(content="How big ?"),
AssistantMessage(content="22B"),
UserMessage(content="Noice 🎉 !"),
],
model="test",
)
hf_messages = mistral_query.model_dump()['messages']
tokenized_mistral = tokenizer_v3.encode_chat_completion(mistral_query).tokens
tokenizer_hf = AutoTokenizer.from_pretrained('mistralai/Mixtral-8x22B-Instruct-v0.1')
tokenized_hf = tokenizer_hf.apply_chat_template(hf_messages, tokenize=True)
assert tokenized_hf == tokenized_mistral
```
# Function calling and special tokens
This tokenizer includes additional special tokens related to function calling:
- [TOOL_CALLS]
- [AVAILABLE_TOOLS]
- [/AVAILABLE_TOOLS]
- [TOOL_RESULTS]
- [/TOOL_RESULTS]
If you want to use this model with function calling, please be sure to apply it similarly to what is done in our [SentencePieceTokenizerV3](https://github.com/mistralai/mistral-common/blob/main/src/mistral_common/tokens/tokenizers/sentencepiece.py#L299).
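As a quick, hedged sanity check (not an official recipe), you can confirm that these control tokens are registered as single entries in the released Hugging Face tokenizer; the printed ids depend on the shipped vocabulary.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x22B-Instruct-v0.1")

# Each control token should resolve to a single vocabulary id.
for token in ["[TOOL_CALLS]", "[AVAILABLE_TOOLS]", "[/AVAILABLE_TOOLS]",
              "[TOOL_RESULTS]", "[/TOOL_RESULTS]"]:
    print(token, tokenizer.convert_tokens_to_ids(token))
```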
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux,
Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault,
Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot,
Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger,
Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona,
Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon,
Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat,
Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen,
Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao,
Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang,
Valera Nemychnikova, William El Sayed, William Marshall | {"language": ["en", "es", "it", "de", "fr"], "license": "apache-2.0"} | mistralai/Mixtral-8x22B-Instruct-v0.1 | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"es",
"it",
"de",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T20:08:01+00:00 | [] | [
"en",
"es",
"it",
"de",
"fr"
] | TAGS
#transformers #safetensors #mixtral #text-generation #conversational #en #es #it #de #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Model Card for Mixtral-8x22B-Instruct-v0.1
The Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mixtral-8x22B-v0.1.
## Run the model
Alternatively, you can run this example with the Hugging Face tokenizer.
To use this example, you'll need transformers version 4.39.0 or higher.
# Instruct tokenizer
The HuggingFace tokenizer included in this release should match our own. To compare:
'pip install mistral-common'
# Function calling and special tokens
This tokenizer includes more special tokens, related to function calling :
- [TOOL_CALLS]
- [AVAILABLE_TOOLS]
- [/AVAILABLE_TOOLS]
- [TOOL_RESULTS]
- [/TOOL_RESULTS]
If you want to use this model with function calling, please be sure to apply it similarly to what is done in our SentencePieceTokenizerV3.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux,
Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault,
Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot,
Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger,
Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona,
Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon,
Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat,
Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen,
Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao,
Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang,
Valera Nemychnikova, William El Sayed, William Marshall | [
"# Model Card for Mixtral-8x22B-Instruct-v0.1\nThe Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mixtral-8x22B-v0.1.",
"## Run the model \n\nAlternatively, you can run this example with the Hugging Face tokenizer.\nTo use this example, you'll need transformers version 4.39.0 or higher.",
"# Instruct tokenizer\nThe HuggingFace tokenizer included in this release should match our own. To compare: \n'pip install mistral-common'",
"# Function calling and special tokens\nThis tokenizer includes more special tokens, related to function calling : \n- [TOOL_CALLS]\n- [AVAILABLE_TOOLS]\n- [/AVAILABLE_TOOLS]\n- [TOOL_RESULTS]\n- [/TOOL_RESULTS]\n\nIf you want to use this model with function calling, please be sure to apply it similarly to what is done in our SentencePieceTokenizerV3.",
"# The Mistral AI Team\nAlbert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux,\nArthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault,\nBlanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot,\nDiego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger,\nGianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona,\nJean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon,\nLucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat,\nMarie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen,\nPierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao,\nThibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang,\nValera Nemychnikova, William El Sayed, William Marshall"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #en #es #it #de #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Model Card for Mixtral-8x22B-Instruct-v0.1\nThe Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mixtral-8x22B-v0.1.",
"## Run the model \n\nAlternatively, you can run this example with the Hugging Face tokenizer.\nTo use this example, you'll need transformers version 4.39.0 or higher.",
"# Instruct tokenizer\nThe HuggingFace tokenizer included in this release should match our own. To compare: \n'pip install mistral-common'",
"# Function calling and special tokens\nThis tokenizer includes more special tokens, related to function calling : \n- [TOOL_CALLS]\n- [AVAILABLE_TOOLS]\n- [/AVAILABLE_TOOLS]\n- [TOOL_RESULTS]\n- [/TOOL_RESULTS]\n\nIf you want to use this model with function calling, please be sure to apply it similarly to what is done in our SentencePieceTokenizerV3.",
"# The Mistral AI Team\nAlbert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux,\nArthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault,\nBlanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot,\nDiego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger,\nGianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona,\nJean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon,\nLucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat,\nMarie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen,\nPierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao,\nThibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang,\nValera Nemychnikova, William El Sayed, William Marshall"
] |
feature-extraction | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
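Pending official instructions, a minimal, hedged sketch for using this checkpoint as a feature extractor is shown below; the pooling choice ([CLS] vector) is an assumption, not a documented recommendation.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "hxuanc/bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("A short example sentence.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)

embedding = hidden[:, 0]  # [CLS] token representation used as a sentence feature
print(embedding.shape)
```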
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | hxuanc/bert-base-uncased | null | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T20:11:06+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # MeowGPT Readme
## Overview
MeowGPT, developed by CutyCat2000x, is a language model based on Llama, released as checkpoint version 3.5. It is designed to generate text in a conversational manner and can be used for various natural language processing tasks.
## Usage
### Loading the Model
To use MeowGPT, you can load it via the `transformers` library in Python using the following code:
```python
from transformers import LlamaTokenizer, AutoModelForCausalLM
tokenizer = LlamaTokenizer.from_pretrained("cutycat2000x/MeowGPT-3.5")
model = AutoModelForCausalLM.from_pretrained("cutycat2000x/MeowGPT-3.5")
```
### Example Prompt
An example of how to prompt the model for generating text:
```jinja
{{ bos_token }}{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = false %}{% endif %}{% for message in loop_messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if loop.index0 == 0 and system_message != false %}{% set content = '<<SYS>>\\n' + system_message + '\\n<</SYS>>\\n\\n' + message['content'] %}{% else %}{% set content = message['content'] %}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + content.strip() + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ ' ' + content.strip() + eos_token }}{% endif %}{% endfor %}
```
The <s> and </s> are start and end tokens.
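In practice you usually do not have to fill this template in by hand: `tokenizer.apply_chat_template` can render it for you. The sketch below is illustrative only and assumes the checkpoint ships the template above as its default `chat_template` (if not, it can be supplied explicitly via the `chat_template=` argument); the prompt and generation settings are placeholders.

```python
from transformers import LlamaTokenizer, AutoModelForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("cutycat2000x/MeowGPT-3.5")
model = AutoModelForCausalLM.from_pretrained("cutycat2000x/MeowGPT-3.5")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello World"},
]

# Renders the Jinja template shown above into token ids, including the <s>/</s> markers
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")

output_ids = model.generate(input_ids, max_new_tokens=128)
# Strip the prompt tokens and decode only the newly generated reply
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```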
## About the Model
- **Base Model**: Llama + Mixtral
- **Checkpoint Version**: 3.5
- **Datasets Used**: Private
## Citation
If you use MeowGPT in your research or projects, please consider citing CutyCat2000x.
## Disclaimer
Please note that while MeowGPT is trained to assist in generating text based on given prompts, it may not always provide accurate or contextually appropriate responses. It's recommended to review and validate the generated content before usage in critical applications.
For more information or support, refer to the `transformers` library documentation or CutyCat2000x's resources. | {"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["freeai", "conversational", "meowgpt", "gpt", "free", "opensource", "splittic", "ai"], "pipeline_tag": "text-generation", "widget": [{"text": "<s> [|User|] Hello World </s>[|Assistant|]"}]} | cutycat2000x/MeowGPT-3.5 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"freeai",
"conversational",
"meowgpt",
"gpt",
"free",
"opensource",
"splittic",
"ai",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T20:14:46+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mistral #text-generation #freeai #conversational #meowgpt #gpt #free #opensource #splittic #ai #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # MeowGPT Readme
## Overview
MeowGPT, developed by CutyCat2000x, is a language model based on Llama with the checkpoint version 3.5. This model is designed to generate text in a conversational manner and can be used for various natural language processing tasks.
## Usage
### Loading the Model
To use MeowGPT, you can load it via the 'transformers' library in Python using the following code:
### Example Prompt
An example of how to prompt the model for generating text:
The <s> and </s> are start and end tokens.
## About the Model
- Base Model: Llama + Mixtral
- Checkpoint Version: 3.5
- Datasets Used: Private
If you use MeowGPT in your research or projects, please consider citing CutyCat2000x.
## Disclaimer
Please note that while MeowGPT is trained to assist in generating text based on given prompts, it may not always provide accurate or contextually appropriate responses. It's recommended to review and validate the generated content before usage in critical applications.
For more information or support, refer to the 'transformers' library documentation or CutyCat2000x's resources. | [
"# MeowGPT Readme",
"## Overview\nMeowGPT, developed by CutyCat2000x, is a language model based on Llama with the checkpoint version 3.5. This model is designed to generate text in a conversational manner and can be used for various natural language processing tasks.",
"## Usage",
"### Loading the Model\nTo use MeowGPT, you can load it via the 'transformers' library in Python using the following code:",
"### Example Prompt\nAn example of how to prompt the model for generating text:\n\n\n\nThe <s> and </s> are start and end tokens.",
"## About the Model\n- Base Model: Llama + Mixtral\n- Checkpoint Version: 3.5\n- Datasets Used: Private\n\nIf you use MeowGPT in your research or projects, please consider citing CutyCat2000x.",
"## Disclaimer\nPlease note that while MeowGPT is trained to assist in generating text based on given prompts, it may not always provide accurate or contextually appropriate responses. It's recommended to review and validate the generated content before usage in critical applications.\n\nFor more information or support, refer to the 'transformers' library documentation or CutyCat2000x's resources."
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #freeai #conversational #meowgpt #gpt #free #opensource #splittic #ai #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# MeowGPT Readme",
"## Overview\nMeowGPT, developed by CutyCat2000x, is a language model based on Llama with the checkpoint version 3.5. This model is designed to generate text in a conversational manner and can be used for various natural language processing tasks.",
"## Usage",
"### Loading the Model\nTo use MeowGPT, you can load it via the 'transformers' library in Python using the following code:",
"### Example Prompt\nAn example of how to prompt the model for generating text:\n\n\n\nThe <s> and </s> are start and end tokens.",
"## About the Model\n- Base Model: Llama + Mixtral\n- Checkpoint Version: 3.5\n- Datasets Used: Private\n\nIf you use MeowGPT in your research or projects, please consider citing CutyCat2000x.",
"## Disclaimer\nPlease note that while MeowGPT is trained to assist in generating text based on given prompts, it may not always provide accurate or contextually appropriate responses. It's recommended to review and validate the generated content before usage in critical applications.\n\nFor more information or support, refer to the 'transformers' library documentation or CutyCat2000x's resources."
] |
text-generation | transformers |
# CodeQwen1.5-7B-Chat - EXL2 8.0bpw
This is an 8.0bpw EXL2 quant of [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
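If you prefer to script against the quant directly rather than load it through the WebUI, a minimal loading sketch with the exllamav2 Python API (around version 0.0.18, the version these quants were made with) might look like the following. The local model path, prompt, and sampling settings are placeholders, not part of this release.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Placeholder path to the downloaded 8.0bpw quant directory
config = ExLlamaV2Config()
config.model_dir = "models/CodeQwen1.5-7B-Chat_exl2_8.0bpw"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across the available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

prompt = "Write a Python function that reverses a string."
print(generator.generate_simple(prompt, settings, 256))
```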
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 13.6136 |
| 7.0 | 13.6220 |
| 6.0 | 13.6524 |
| 5.0 | 13.7689 |
| 4.0 | 13.9466 |
| 3.5 | 14.2961 |
| 3.0 | 16.8038 |
| 2.75 | 16.9662 |
| 2.5 | 17.4515 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "chat"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B-Chat"} | Dracones/CodeQwen1.5-7B-Chat_exl2_8.0bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"chat",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B-Chat",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-16T20:15:53+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| CodeQwen1.5-7B-Chat - EXL2 8.0bpw
=================================
This is an 8.0bpw EXL2 quant of Qwen/CodeQwen1.5-7B-Chat
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
text-generation | transformers |
# CodeQwen1.5-7B-Chat - EXL2 7.0bpw
This is a 7.0bpw EXL2 quant of [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 13.6136 |
| 7.0 | 13.6220 |
| 6.0 | 13.6524 |
| 5.0 | 13.7689 |
| 4.0 | 13.9466 |
| 3.5 | 14.2961 |
| 3.0 | 16.8038 |
| 2.75 | 16.9662 |
| 2.5 | 17.4515 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "chat"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B-Chat"} | Dracones/CodeQwen1.5-7B-Chat_exl2_7.0bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"chat",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B-Chat",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"7-bit",
"region:us"
] | null | 2024-04-16T20:17:11+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #7-bit #region-us
| CodeQwen1.5-7B-Chat - EXL2 7.0bpw
=================================
This is a 7.0bpw EXL2 quant of Qwen/CodeQwen1.5-7B-Chat
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #7-bit #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
text-generation | transformers |
# CodeQwen1.5-7B-Chat - EXL2 6.0bpw
This is a 6.0bpw EXL2 quant of [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 13.6136 |
| 7.0 | 13.6220 |
| 6.0 | 13.6524 |
| 5.0 | 13.7689 |
| 4.0 | 13.9466 |
| 3.5 | 14.2961 |
| 3.0 | 16.8038 |
| 2.75 | 16.9662 |
| 2.5 | 17.4515 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "chat"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B-Chat"} | Dracones/CodeQwen1.5-7B-Chat_exl2_6.0bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"chat",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B-Chat",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"6-bit",
"region:us"
] | null | 2024-04-16T20:18:21+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us
| CodeQwen1.5-7B-Chat - EXL2 6.0bpw
=================================
This is a 6.0bpw EXL2 quant of Qwen/CodeQwen1.5-7B-Chat
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
text-generation | transformers |
# CodeQwen1.5-7B-Chat - EXL2 5.0bpw
This is a 5.0bpw EXL2 quant of [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 13.6136 |
| 7.0 | 13.6220 |
| 6.0 | 13.6524 |
| 5.0 | 13.7689 |
| 4.0 | 13.9466 |
| 3.5 | 14.2961 |
| 3.0 | 16.8038 |
| 2.75 | 16.9662 |
| 2.5 | 17.4515 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "chat"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B-Chat"} | Dracones/CodeQwen1.5-7B-Chat_exl2_5.0bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"chat",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B-Chat",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"5-bit",
"region:us"
] | null | 2024-04-16T20:19:27+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us
| CodeQwen1.5-7B-Chat - EXL2 5.0bpw
=================================
This is a 5.0bpw EXL2 quant of Qwen/CodeQwen1.5-7B-Chat
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
text-generation | transformers |
# CodeQwen1.5-7B-Chat - EXL2 4.0bpw
This is a 4.0bpw EXL2 quant of [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 13.6136 |
| 7.0 | 13.6220 |
| 6.0 | 13.6524 |
| 5.0 | 13.7689 |
| 4.0 | 13.9466 |
| 3.5 | 14.2961 |
| 3.0 | 16.8038 |
| 2.75 | 16.9662 |
| 2.5 | 17.4515 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "chat"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B-Chat"} | Dracones/CodeQwen1.5-7B-Chat_exl2_4.0bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"chat",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B-Chat",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-16T20:20:19+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| CodeQwen1.5-7B-Chat - EXL2 4.0bpw
=================================
This is a 4.0bpw EXL2 quant of Qwen/CodeQwen1.5-7B-Chat
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
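While the card leaves this section unfilled, a generic, hedged sketch for loading a causal-LM checkpoint from this repository is shown below. The repository id is taken from this card's metadata; the prompt and generation settings are illustrative, and `device_map="auto"` assumes `accelerate` is installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lugaborg/WaterBug"  # repository id from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```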
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Lugaborg/WaterBug | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T20:20:58+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_helpfulhelpful_gpt3_gamma0.0_beta0.1_subset20000_modelmistral7b_maxsteps5000_bz8_lr5e-06
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 5000
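As an illustration only (the actual training script is not published with this card), the reported values above could map onto TRL's `DPOTrainer` roughly as sketched below. The preference dataset, its file name, and the LoRA settings are placeholders; `beta=0.1` mirrors the value in the model name.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "mistralai/Mistral-7B-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

# Placeholder preference dataset with "prompt" / "chosen" / "rejected" columns
train_dataset = load_dataset("json", data_files="helpful_prefs.json", split="train")

args = TrainingArguments(
    output_dir="dpo-mistral7b",
    learning_rate=5e-06,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # effective train batch size 8
    warmup_steps=15,
    max_steps=5000,
    lr_scheduler_type="linear",
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,                  # with a PEFT adapter, the frozen base model acts as the reference
    beta=0.1,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # placeholder LoRA settings
)
trainer.train()
```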
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "dpo_helpfulhelpful_gpt3_gamma0.0_beta0.1_subset20000_modelmistral7b_maxsteps5000_bz8_lr5e-06", "results": []}]} | Holarissun/dpo_helpfulhelpful_gpt3_gamma0.0_beta0.1_subset20000_modelmistral7b_maxsteps5000_bz8_lr5e-06 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T20:21:05+00:00 | [] | [] | TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
|
# dpo_helpfulhelpful_gpt3_gamma0.0_beta0.1_subset20000_modelmistral7b_maxsteps5000_bz8_lr5e-06
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# dpo_helpfulhelpful_gpt3_gamma0.0_beta0.1_subset20000_modelmistral7b_maxsteps5000_bz8_lr5e-06\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 15\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.38.2\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n",
"# dpo_helpfulhelpful_gpt3_gamma0.0_beta0.1_subset20000_modelmistral7b_maxsteps5000_bz8_lr5e-06\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 15\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.38.2\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dapt_plus_tapt_helpfulness_base_pretraining_model
This model is a fine-tuned version of [BigTMiami/amazon_pretraining_5M_model_corrected](https://huggingface.co/BigTMiami/amazon_pretraining_5M_model_corrected) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4446
## Model description
More information needed
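Although the card gives no usage example, this is a RoBERTa-style masked-language-model checkpoint (see the repository tags), so a minimal hedged sketch of querying it could look like the following; the repository id is taken from this card's metadata and the example sentence is illustrative.

```python
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="BigTMiami/dapt_plus_tapt_helpfulness_base_pretraining_model",
)
# RoBERTa tokenizers use <mask> as the mask token
print(fill("This product review was really <mask>."))
```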
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 21
- eval_batch_size: 21
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 100
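For illustration only (the training script itself is not part of this card), the list above corresponds roughly to the following `TrainingArguments`; anything not listed above is left at its default or is an assumption.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dapt_plus_tapt_helpfulness_base_pretraining_model",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=21,
    per_device_eval_batch_size=21,
    gradient_accumulation_steps=2,   # total train batch size 42
    num_train_epochs=100,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-06,
    seed=42,
)
```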
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6784 | 1.0 | 232 | 1.5304 |
| 1.6014 | 2.0 | 465 | 1.5186 |
| 1.5847 | 3.0 | 697 | 1.5100 |
| 1.5492 | 4.0 | 930 | 1.4959 |
| 1.5369 | 5.0 | 1162 | 1.5022 |
| 1.5086 | 6.0 | 1395 | 1.4919 |
| 1.4953 | 7.0 | 1627 | 1.4770 |
| 1.4729 | 8.0 | 1860 | 1.4840 |
| 1.4612 | 9.0 | 2092 | 1.4719 |
| 1.4502 | 10.0 | 2325 | 1.4595 |
| 1.436 | 11.0 | 2557 | 1.4670 |
| 1.4178 | 12.0 | 2790 | 1.4709 |
| 1.4055 | 13.0 | 3022 | 1.4514 |
| 1.3951 | 14.0 | 3255 | 1.4595 |
| 1.3848 | 15.0 | 3487 | 1.4585 |
| 1.3678 | 16.0 | 3720 | 1.4752 |
| 1.3659 | 17.0 | 3952 | 1.4636 |
| 1.3523 | 18.0 | 4185 | 1.4515 |
| 1.3443 | 19.0 | 4417 | 1.4609 |
| 1.3285 | 20.0 | 4650 | 1.4590 |
| 1.3283 | 21.0 | 4882 | 1.4595 |
| 1.3109 | 22.0 | 5115 | 1.4490 |
| 1.3111 | 23.0 | 5347 | 1.4457 |
| 1.2964 | 24.0 | 5580 | 1.4543 |
| 1.2945 | 25.0 | 5812 | 1.4500 |
| 1.2792 | 26.0 | 6045 | 1.4537 |
| 1.2741 | 27.0 | 6277 | 1.4428 |
| 1.2603 | 28.0 | 6510 | 1.4508 |
| 1.2609 | 29.0 | 6742 | 1.4473 |
| 1.246 | 30.0 | 6975 | 1.4458 |
| 1.2436 | 31.0 | 7207 | 1.4473 |
| 1.2324 | 32.0 | 7440 | 1.4384 |
| 1.2282 | 33.0 | 7672 | 1.4368 |
| 1.2164 | 34.0 | 7905 | 1.4466 |
| 1.2146 | 35.0 | 8137 | 1.4460 |
| 1.2022 | 36.0 | 8370 | 1.4520 |
| 1.1991 | 37.0 | 8602 | 1.4509 |
| 1.191 | 38.0 | 8835 | 1.4412 |
| 1.1909 | 39.0 | 9067 | 1.4449 |
| 1.1777 | 40.0 | 9300 | 1.4521 |
| 1.1762 | 41.0 | 9532 | 1.4582 |
| 1.166 | 42.0 | 9765 | 1.4403 |
| 1.1618 | 43.0 | 9997 | 1.4484 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "BigTMiami/amazon_pretraining_5M_model_corrected", "model-index": [{"name": "dapt_plus_tapt_helpfulness_base_pretraining_model", "results": []}]} | BigTMiami/dapt_plus_tapt_helpfulness_base_pretraining_model | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:BigTMiami/amazon_pretraining_5M_model_corrected",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T20:21:07+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-BigTMiami/amazon_pretraining_5M_model_corrected #license-mit #autotrain_compatible #endpoints_compatible #region-us
| dapt\_plus\_tapt\_helpfulness\_base\_pretraining\_model
=======================================================
This model is a fine-tuned version of BigTMiami/amazon\_pretraining\_5M\_model\_corrected on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4446
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 21
* eval\_batch\_size: 21
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 42
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
* lr\_scheduler\_type: linear
* num\_epochs: 100
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 21\n* eval\\_batch\\_size: 21\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-BigTMiami/amazon_pretraining_5M_model_corrected #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 21\n* eval\\_batch\\_size: 21\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# CodeQwen1.5-7B-Chat - EXL2 3.5bpw
This is a 3.5bpw EXL2 quant of [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 13.6136 |
| 7.0 | 13.6220 |
| 6.0 | 13.6524 |
| 5.0 | 13.7689 |
| 4.0 | 13.9466 |
| 3.5 | 14.2961 |
| 3.0 | 16.8038 |
| 2.75 | 16.9662 |
| 2.5 | 17.4515 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "chat"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B-Chat"} | Dracones/CodeQwen1.5-7B-Chat_exl2_3.5bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"chat",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B-Chat",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T20:21:08+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| CodeQwen1.5-7B-Chat - EXL2 3.5bpw
=================================
This is a 3.5bpw EXL2 quant of Qwen/CodeQwen1.5-7B-Chat
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6137
- F1 Score: 0.6817
- Accuracy: 0.6831
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6513 | 16.67 | 200 | 0.6198 | 0.6598 | 0.6595 |
| 0.5835 | 33.33 | 400 | 0.6152 | 0.6713 | 0.6737 |
| 0.5589 | 50.0 | 600 | 0.6264 | 0.6754 | 0.6765 |
| 0.534 | 66.67 | 800 | 0.6194 | 0.6849 | 0.6865 |
| 0.5123 | 83.33 | 1000 | 0.6301 | 0.6809 | 0.6820 |
| 0.4959 | 100.0 | 1200 | 0.6281 | 0.6846 | 0.6879 |
| 0.4856 | 116.67 | 1400 | 0.6362 | 0.6806 | 0.6810 |
| 0.4751 | 133.33 | 1600 | 0.6314 | 0.6807 | 0.6807 |
| 0.4694 | 150.0 | 1800 | 0.6404 | 0.6790 | 0.6796 |
| 0.4618 | 166.67 | 2000 | 0.6413 | 0.6777 | 0.6782 |
| 0.4565 | 183.33 | 2200 | 0.6525 | 0.6735 | 0.6748 |
| 0.4488 | 200.0 | 2400 | 0.6506 | 0.6798 | 0.6834 |
| 0.4421 | 216.67 | 2600 | 0.6527 | 0.6830 | 0.6834 |
| 0.4358 | 233.33 | 2800 | 0.6662 | 0.6774 | 0.6775 |
| 0.4303 | 250.0 | 3000 | 0.6751 | 0.6788 | 0.6796 |
| 0.4231 | 266.67 | 3200 | 0.6860 | 0.6733 | 0.6755 |
| 0.4186 | 283.33 | 3400 | 0.6822 | 0.6791 | 0.6793 |
| 0.4096 | 300.0 | 3600 | 0.7026 | 0.6675 | 0.6716 |
| 0.4037 | 316.67 | 3800 | 0.6953 | 0.6747 | 0.6751 |
| 0.3951 | 333.33 | 4000 | 0.7126 | 0.6753 | 0.6758 |
| 0.3872 | 350.0 | 4200 | 0.7182 | 0.6772 | 0.6789 |
| 0.3832 | 366.67 | 4400 | 0.7246 | 0.6702 | 0.6730 |
| 0.3755 | 383.33 | 4600 | 0.7310 | 0.6731 | 0.6751 |
| 0.3709 | 400.0 | 4800 | 0.7440 | 0.6768 | 0.6782 |
| 0.364 | 416.67 | 5000 | 0.7577 | 0.6745 | 0.6761 |
| 0.3597 | 433.33 | 5200 | 0.7783 | 0.6795 | 0.6813 |
| 0.3552 | 450.0 | 5400 | 0.7559 | 0.6725 | 0.6730 |
| 0.3493 | 466.67 | 5600 | 0.7786 | 0.6778 | 0.6782 |
| 0.3459 | 483.33 | 5800 | 0.7722 | 0.6713 | 0.6713 |
| 0.339 | 500.0 | 6000 | 0.8056 | 0.6779 | 0.6789 |
| 0.336 | 516.67 | 6200 | 0.7854 | 0.6726 | 0.6741 |
| 0.3327 | 533.33 | 6400 | 0.8095 | 0.6747 | 0.6751 |
| 0.3308 | 550.0 | 6600 | 0.8186 | 0.6746 | 0.6758 |
| 0.3243 | 566.67 | 6800 | 0.7991 | 0.6724 | 0.6730 |
| 0.3219 | 583.33 | 7000 | 0.7950 | 0.6720 | 0.6727 |
| 0.3194 | 600.0 | 7200 | 0.8116 | 0.6710 | 0.6716 |
| 0.3178 | 616.67 | 7400 | 0.8282 | 0.6705 | 0.6713 |
| 0.3144 | 633.33 | 7600 | 0.8148 | 0.6727 | 0.6734 |
| 0.3123 | 650.0 | 7800 | 0.8252 | 0.6706 | 0.6713 |
| 0.3106 | 666.67 | 8000 | 0.8117 | 0.6696 | 0.6709 |
| 0.3089 | 683.33 | 8200 | 0.8293 | 0.6702 | 0.6713 |
| 0.3062 | 700.0 | 8400 | 0.8343 | 0.6679 | 0.6692 |
| 0.3064 | 716.67 | 8600 | 0.8377 | 0.6745 | 0.6751 |
| 0.3031 | 733.33 | 8800 | 0.8386 | 0.6714 | 0.6723 |
| 0.3035 | 750.0 | 9000 | 0.8418 | 0.6702 | 0.6713 |
| 0.301 | 766.67 | 9200 | 0.8529 | 0.6704 | 0.6713 |
| 0.3008 | 783.33 | 9400 | 0.8379 | 0.6687 | 0.6692 |
| 0.2993 | 800.0 | 9600 | 0.8431 | 0.6691 | 0.6702 |
| 0.2982 | 816.67 | 9800 | 0.8467 | 0.6689 | 0.6702 |
| 0.2987 | 833.33 | 10000 | 0.8449 | 0.6683 | 0.6696 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T20:21:48+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_EMP\_H3K79me3-seqsight\_32768\_512\_30M-L32\_all
=====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6137
* F1 Score: 0.6817
* Accuracy: 0.6831
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# question_generation_final
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0602
- eval_runtime: 301.5563
- eval_samples_per_second: 35.051
- eval_steps_per_second: 2.192
- epoch: 0.64
- step: 3500
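Usage is not documented yet; as a placeholder, here is a minimal text2text inference sketch with `transformers` (the `generate question:` prefix is an assumption — match it to the format used during fine-tuning):
```python
from transformers import pipeline

# Minimal sketch, not an official usage example; the prompt prefix is assumed.
qg = pipeline("text2text-generation", model="nadika/question_generation_final")
print(qg("generate question: The Eiffel Tower was completed in 1889.", max_new_tokens=32))
```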
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "t5-small", "model-index": [{"name": "question_generation_final", "results": []}]} | nadika/question_generation_final | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T20:22:07+00:00 | [] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# question_generation_final
This model is a fine-tuned version of t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0602
- eval_runtime: 301.5563
- eval_samples_per_second: 35.051
- eval_steps_per_second: 2.192
- epoch: 0.64
- step: 3500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# question_generation_final\n\nThis model is a fine-tuned version of t5-small on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.0602\n- eval_runtime: 301.5563\n- eval_samples_per_second: 35.051\n- eval_steps_per_second: 2.192\n- epoch: 0.64\n- step: 3500",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# question_generation_final\n\nThis model is a fine-tuned version of t5-small on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.0602\n- eval_runtime: 301.5563\n- eval_samples_per_second: 35.051\n- eval_steps_per_second: 2.192\n- epoch: 0.64\n- step: 3500",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Lugaborg/Procyote | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T20:25:19+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
translation | null |
# Model Summary
This model was fine-tuned with QLoRA from the pre-quantized Gemma-7B base model for Hinglish/English translations and completions. The training data includes two datasets: fnnerd/Baatcheet_Hinglish_English_Translation_Corpus and findnitai/english-to-hinglish.
This model, developed by L00ny Labs, has been open-sourced for research purposes. For commercial applications, please contact us; we may be able to provide more accurate models (details below).


# Uploaded
- **Developed by:** fnnerd
- **License:** apache-2.0
- **Finetuned from model :** gemma-7b
- **Finetuned from model :** L00ny Labs. Email: L00ny Labs <[email protected]>.
**Join our LLM experiments: https://www.linkedin.com/groups/13005614/**
Thanks to [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
# Run The Model
Jupyter notebook to run the model - https://github.com/Looony-Labs/ml-experiments/blob/main/Baatcheet%20/Baatcheet_7b_demo.ipynb
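For a quick local test outside the notebook, a minimal sketch with `transformers` + `peft` (this assumes the repo ships a PEFT/LoRA adapter on top of unsloth/gemma-7b-bnb-4bit together with its tokenizer files; if the weights are merged, load them with `AutoModelForCausalLM` instead):
```python
# Minimal sketch, not the official notebook flow. Assumes a PEFT adapter compatible
# with the unsloth/gemma-7b-bnb-4bit base; the prompt wording is illustrative.
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained("fnnerd/Baatcheet-7b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("fnnerd/Baatcheet-7b")

prompt = "Translate to Hinglish: How are you doing today?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```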
# Contact Us
Email: L00ny Labs <[email protected]>
LinkedIn: https://www.linkedin.com/in/sambodhi-khandelwal-42a18816/
Join our LLM experiments: https://www.linkedin.com/groups/13005614/
| {"language": ["en", "hi"], "license": "apache-2.0", "tags": ["hindi", "hinglish", "language", "translation"], "datasets": ["fnnerd/Baatcheet_Hinglish_English_Translation_Corpus", "findnitai/english-to-hinglish"], "base_model": "unsloth/gemma-7b-bnb-4bit"} | fnnerd/Baatcheet-7b | null | [
"safetensors",
"hindi",
"hinglish",
"language",
"translation",
"en",
"hi",
"dataset:fnnerd/Baatcheet_Hinglish_English_Translation_Corpus",
"dataset:findnitai/english-to-hinglish",
"base_model:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T20:25:30+00:00 | [] | [
"en",
"hi"
] | TAGS
#safetensors #hindi #hinglish #language #translation #en #hi #dataset-fnnerd/Baatcheet_Hinglish_English_Translation_Corpus #dataset-findnitai/english-to-hinglish #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #region-us
|
# Model Summary
This model was fine-tuned with QLoRA from the pre-quantized Gemma-7B base model for Hinglish/English translations and completions. The training data includes two datasets: fnnerd/Baatcheet_Hinglish_English_Translation_Corpus and findnitai/english-to-hinglish.
This model, developed by L00ny Labs, has been open-sourced for research purposes. For commercial applications, please contact us; we may be able to provide more accurate models (details below).
!baaatcheet_1.webp
!baatcheet_2.webp
# Uploaded
- Developed by: fnnerd
- License: apache-2.0
- Finetuned from model : gemma-7b
- Finetuned from model : L00ny Labs. Email: L00ny Labs <URL@URL>.
Join our LLM experiments: URL
Thanks to Unsloth and Huggingface's TRL library.
# Run The Model
Jupyter notebook to run the model - URL
# Contact Us
Email: L00ny Labs <URL@URL>
LinkedIn: URL
Join our LLM experiments: URL
| [
"# Model Summary\n\nBased on the pre-quantized Gemma-7B model, this model has been fine-tuned on the Gemma-7B base model for Hinglish/English translations and completions using QLoRA. The training data includes two datasets: fnnerd/Baatcheet_Hinglish_English_Translation_Corpus and findnitai/english-to-hinglish.\nThis model, developed by L00ny Labs, has been open-sourced for research purposes. For commercial applications, please contact us, we might be able to provide more accurate models (details below). \n!baaatcheet_1.webp\n!baatcheet_2.webp",
"# Uploaded\n\n- Developed by: fnnerd\n- License: apache-2.0\n- Finetuned from model : gemma-7b\n- Finetuned from model : L00ny Labs. Email: L00ny Labs <URL@URL>. \n\nJoin our LLM experiments: URL\n\nThanks to Unsloth and Huggingface's TRL library.",
"# Run The Model\nJupyter notebook to run the model - URL",
"# Contact Us\n\nEmail: L00ny Labs <URL@URL>\nLinkedIn: URL\n\nJoin our LLM experiments: URL"
] | [
"TAGS\n#safetensors #hindi #hinglish #language #translation #en #hi #dataset-fnnerd/Baatcheet_Hinglish_English_Translation_Corpus #dataset-findnitai/english-to-hinglish #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #region-us \n",
"# Model Summary\n\nBased on the pre-quantized Gemma-7B model, this model has been fine-tuned on the Gemma-7B base model for Hinglish/English translations and completions using QLoRA. The training data includes two datasets: fnnerd/Baatcheet_Hinglish_English_Translation_Corpus and findnitai/english-to-hinglish.\nThis model, developed by L00ny Labs, has been open-sourced for research purposes. For commercial applications, please contact us, we might be able to provide more accurate models (details below). \n!baaatcheet_1.webp\n!baatcheet_2.webp",
"# Uploaded\n\n- Developed by: fnnerd\n- License: apache-2.0\n- Finetuned from model : gemma-7b\n- Finetuned from model : L00ny Labs. Email: L00ny Labs <URL@URL>. \n\nJoin our LLM experiments: URL\n\nThanks to Unsloth and Huggingface's TRL library.",
"# Run The Model\nJupyter notebook to run the model - URL",
"# Contact Us\n\nEmail: L00ny Labs <URL@URL>\nLinkedIn: URL\n\nJoin our LLM experiments: URL"
] |
null | mlx |
# GreenBitAI/Qwen-1.5-32B-Chat-layer-mix-bpw-3.0-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Qwen-1.5-32B-Chat-layer-mix-bpw-3.0`]().
Refer to the [original model card](https://huggingface.co/GreenBitAI/Qwen-1.5-32B-Chat-layer-mix-bpw-3.0) for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate
model, tokenizer = load("GreenBitAI/Qwen-1.5-32B-Chat-layer-mix-bpw-3.0-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
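Since this is a chat model, wrapping the prompt in the chat template usually gives better results. A small follow-up sketch (it assumes the tokenizer returned by `load` exposes the standard Hugging Face `apply_chat_template` method):
```python
from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/Qwen-1.5-32B-Chat-layer-mix-bpw-3.0-mlx")
messages = [{"role": "user", "content": "hello"}]
# apply_chat_template availability is an assumption; it mirrors the HF tokenizer API.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```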
| {"license": "apache-2.0", "tags": ["mlx"]} | GreenBitAI/Qwen-1.5-32B-Chat-layer-mix-bpw-3.0-mlx | null | [
"mlx",
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T20:27:59+00:00 | [] | [] | TAGS
#mlx #safetensors #qwen2 #license-apache-2.0 #region-us
|
# GreenBitAI/Qwen-1.5-32B-Chat-layer-mix-bpw-3.0-mlx
This quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-32B-Chat-layer-mix-bpw-3.0']().
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# GreenBitAI/Qwen-1.5-32B-Chat-layer-mix-bpw-3.0-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-32B-Chat-layer-mix-bpw-3.0']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#mlx #safetensors #qwen2 #license-apache-2.0 #region-us \n",
"# GreenBitAI/Qwen-1.5-32B-Chat-layer-mix-bpw-3.0-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-32B-Chat-layer-mix-bpw-3.0']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: HuggingFaceH4/zephyr-7b-beta
# model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: abacusai/SystemChat
type: sharegpt
field_human: human
field_model: gpt
eval_sample_packing: false
dataset_prepared_path:
val_set_size: 0.01
output_dir: ./model/Featherlite-Aurora-v0.2.1-beta
adapter: qlora
lora_model_dir:
# 16384 8192 4096 2048
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: mistral-code
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 2
num_epochs: 8
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
main_process_port: 0
warmup_steps: 10
eval_steps: 20
save_steps:
debug:
# deepspeed:
deepspeed: deepspeed_configs/zero2.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# model/Featherlite-Aurora-v0.2.1-beta
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7011
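Inference is not documented on this card; a minimal sketch of loading the QLoRA adapter with `transformers` + `peft` is below. The chat template call is an assumption — training used axolotl's sharegpt format, so adjust the prompt format to match.
```python
# Hedged sketch, not the card's official usage: loads the adapter on top of
# HuggingFaceH4/zephyr-7b-beta. The prompt/template choice is an assumption.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    "hvadaparty/Featherlite-Aurora-v0.2.1-beta",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Explain QLoRA in one sentence."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```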
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8259 | 0.0 | 1 | 0.8638 |
| 0.6755 | 0.07 | 20 | 0.7034 |
| 0.6606 | 0.15 | 40 | 0.6795 |
| 0.5661 | 0.22 | 60 | 0.6679 |
| 0.6299 | 0.29 | 80 | 0.6629 |
| 0.6353 | 0.37 | 100 | 0.6537 |
| 0.7353 | 0.44 | 120 | 0.6478 |
| 0.7021 | 0.51 | 140 | 0.6439 |
| 0.6998 | 0.59 | 160 | 0.6443 |
| 0.6126 | 0.66 | 180 | 0.6396 |
| 0.6476 | 0.73 | 200 | 0.6399 |
| 0.6688 | 0.81 | 220 | 0.6386 |
| 0.6245 | 0.88 | 240 | 0.6361 |
| 0.794 | 0.95 | 260 | 0.6355 |
| 0.5334 | 1.0 | 280 | 0.6348 |
| 0.4583 | 1.08 | 300 | 0.6496 |
| 0.4925 | 1.15 | 320 | 0.6543 |
| 0.4876 | 1.22 | 340 | 0.6528 |
| 0.5079 | 1.3 | 360 | 0.6525 |
| 0.5007 | 1.37 | 380 | 0.6577 |
| 0.4785 | 1.44 | 400 | 0.6475 |
| 0.5234 | 1.52 | 420 | 0.6545 |
| 0.5493 | 1.59 | 440 | 0.6541 |
| 0.5077 | 1.66 | 460 | 0.6534 |
| 0.5291 | 1.74 | 480 | 0.6536 |
| 0.4966 | 1.81 | 500 | 0.6500 |
| 0.5081 | 1.88 | 520 | 0.6482 |
| 0.499 | 1.96 | 540 | 0.6510 |
| 0.3433 | 2.01 | 560 | 0.6701 |
| 0.2965 | 2.08 | 580 | 0.7550 |
| 0.3244 | 2.15 | 600 | 0.7403 |
| 0.2948 | 2.23 | 620 | 0.7563 |
| 0.3348 | 2.3 | 640 | 0.7556 |
| 0.2793 | 2.37 | 660 | 0.7507 |
| 0.3042 | 2.45 | 680 | 0.7506 |
| 0.3371 | 2.52 | 700 | 0.7613 |
| 0.2843 | 2.59 | 720 | 0.7687 |
| 0.3549 | 2.67 | 740 | 0.7519 |
| 0.3165 | 2.74 | 760 | 0.7493 |
| 0.2772 | 2.81 | 780 | 0.7542 |
| 0.3155 | 2.89 | 800 | 0.7594 |
| 0.4343 | 2.96 | 820 | 0.7600 |
| 0.1527 | 3.01 | 840 | 0.8169 |
| 0.1162 | 3.08 | 860 | 0.9550 |
| 0.1593 | 3.16 | 880 | 0.9190 |
| 0.1656 | 3.23 | 900 | 0.9182 |
| 0.1759 | 3.3 | 920 | 0.9430 |
| 0.1531 | 3.38 | 940 | 0.9248 |
| 0.1429 | 3.45 | 960 | 0.9281 |
| 0.1725 | 3.52 | 980 | 0.9311 |
| 0.1484 | 3.6 | 1000 | 0.9215 |
| 0.151 | 3.67 | 1020 | 0.9305 |
| 0.169 | 3.74 | 1040 | 0.9258 |
| 0.1615 | 3.82 | 1060 | 0.9273 |
| 0.1528 | 3.89 | 1080 | 0.9307 |
| 0.156 | 3.96 | 1100 | 0.9432 |
| 0.0579 | 4.01 | 1120 | 1.0395 |
| 0.0727 | 4.09 | 1140 | 1.0942 |
| 0.0753 | 4.16 | 1160 | 1.1173 |
| 0.0657 | 4.23 | 1180 | 1.1419 |
| 0.0705 | 4.31 | 1200 | 1.1434 |
| 0.0812 | 4.38 | 1220 | 1.1355 |
| 0.0595 | 4.45 | 1240 | 1.1423 |
| 0.0892 | 4.53 | 1260 | 1.1687 |
| 0.0582 | 4.6 | 1280 | 1.1695 |
| 0.0913 | 4.67 | 1300 | 1.1330 |
| 0.0936 | 4.75 | 1320 | 1.1552 |
| 0.0894 | 4.82 | 1340 | 1.1703 |
| 0.0822 | 4.89 | 1360 | 1.1832 |
| 0.0758 | 4.97 | 1380 | 1.1691 |
| 0.0358 | 5.02 | 1400 | 1.2655 |
| 0.0256 | 5.09 | 1420 | 1.3611 |
| 0.0306 | 5.16 | 1440 | 1.4057 |
| 0.0264 | 5.24 | 1460 | 1.3861 |
| 0.0299 | 5.31 | 1480 | 1.4160 |
| 0.0235 | 5.38 | 1500 | 1.4188 |
| 0.0404 | 5.46 | 1520 | 1.4194 |
| 0.0425 | 5.53 | 1540 | 1.4067 |
| 0.0329 | 5.6 | 1560 | 1.4141 |
| 0.0242 | 5.68 | 1580 | 1.4337 |
| 0.0294 | 5.75 | 1600 | 1.4179 |
| 0.0263 | 5.82 | 1620 | 1.4280 |
| 0.0338 | 5.9 | 1640 | 1.4181 |
| 0.0259 | 5.97 | 1660 | 1.4228 |
| 0.0131 | 6.02 | 1680 | 1.4636 |
| 0.0138 | 6.1 | 1700 | 1.5978 |
| 0.0117 | 6.17 | 1720 | 1.6052 |
| 0.0117 | 6.24 | 1740 | 1.6123 |
| 0.0143 | 6.32 | 1760 | 1.6160 |
| 0.0154 | 6.39 | 1780 | 1.6216 |
| 0.0119 | 6.46 | 1800 | 1.6152 |
| 0.0156 | 6.53 | 1820 | 1.6225 |
| 0.0152 | 6.61 | 1840 | 1.6286 |
| 0.0141 | 6.68 | 1860 | 1.6322 |
| 0.0148 | 6.75 | 1880 | 1.6336 |
| 0.0106 | 6.83 | 1900 | 1.6312 |
| 0.0121 | 6.9 | 1920 | 1.6351 |
| 0.0097 | 6.97 | 1940 | 1.6372 |
| 0.0077 | 7.03 | 1960 | 1.6420 |
| 0.0095 | 7.1 | 1980 | 1.6641 |
| 0.0087 | 7.17 | 2000 | 1.6786 |
| 0.0094 | 7.25 | 2020 | 1.6866 |
| 0.0105 | 7.32 | 2040 | 1.6918 |
| 0.0108 | 7.39 | 2060 | 1.6965 |
| 0.0099 | 7.47 | 2080 | 1.6991 |
| 0.0089 | 7.54 | 2100 | 1.7002 |
| 0.008 | 7.61 | 2120 | 1.7010 |
| 0.0125 | 7.68 | 2140 | 1.7023 |
| 0.0098 | 7.76 | 2160 | 1.7015 |
| 0.0105 | 7.83 | 2180 | 1.7011 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 | {"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "HuggingFaceH4/zephyr-7b-beta", "model-index": [{"name": "model/Featherlite-Aurora-v0.2.1-beta", "results": []}]} | hvadaparty/Featherlite-Aurora-v0.2.1-beta | null | [
"peft",
"safetensors",
"mistral",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"4-bit",
"region:us"
] | null | 2024-04-16T20:28:30+00:00 | [] | [] | TAGS
#peft #safetensors #mistral #generated_from_trainer #base_model-HuggingFaceH4/zephyr-7b-beta #license-mit #4-bit #region-us
| <img src="URL alt="Built with Axolotl" width="200" height="32"/>
See axolotl config
axolotl version: '0.4.0'
model/Featherlite-Aurora-v0.2.1-beta
====================================
This model is a fine-tuned version of HuggingFaceH4/zephyr-7b-beta on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.7011
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* distributed\_type: multi-GPU
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 10
* num\_epochs: 8
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0.dev0
* Pytorch 2.2.2+cu121
* Datasets 2.16.1
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] | [
"TAGS\n#peft #safetensors #mistral #generated_from_trainer #base_model-HuggingFaceH4/zephyr-7b-beta #license-mit #4-bit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] |
text-generation | transformers |
<img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for OLMo 7B
<!-- Provide a quick summary of what the model is/does. -->
OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
We release all code, checkpoints, logs (coming soon), and details involved in training these models.
## Model Details
The core models released in this batch are the following:
| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|--------|---------|-------------|-----------------|----------------|
| [OLMo 1B](https://huggingface.co/allenai/OLMo-1B) | 3 Trillion |16 | 2048 | 16 | 2048 |
| [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) | 2.5 Trillion | 32 | 4096 | 32 | 2048 |
| [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T) | 2 Trillion | 32 | 4096 | 32 | 2048 |
We are releasing many checkpoints for these models, one for every 1000 training steps.
The naming convention is `step1000-tokens4B`.
In particular, we focus on four revisions of the 7B models:
| Name | HF Repo | Model Revision | Tokens | Note |
|------------|---------|----------------|-------------------|------|
|OLMo 7B| [allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B)|`main`| 2.5T|The base OLMo 7B model|
|OLMo 7B (not annealed)|[allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B)|step556000-tokens2460B|2.5T| learning rate not annealed to 0|
|OLMo 7B-2T|[allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B)| step452000-tokens2000B |2T| OLMo checkpoint at 2T tokens|
|OLMo-7B-Twin-2T|[allenai/OLMo-7B-Twin-2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T)|`main`|2T| Twin version on different hardware|
To load a specific model revision with HuggingFace, simply add the argument `revision`:
```python
import hf_olmo  # pip install ai2-olmo; registers the OLMo architecture with transformers
from transformers import AutoModelForCausalLM

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B", revision="step1000-tokens4B")
```
All revisions/branches are listed in the file `revisions.txt`.
Or, you can access all the revisions for the models via the following code snippet:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("allenai/OLMo-7B")
branches = [b.name for b in out.branches]
```
A few revisions were lost due to an error, but the vast majority are present.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Allen Institute for AI (AI2)
- **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW
- **Model type:** a Transformer style autoregressive language model.
- **Language(s) (NLP):** English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org`
- **Date cutoff:** Feb./March 2023 based on Dolma dataset version.
### Model Sources
<!-- Provide the basic links for the model. -->
- **Project Page:** https://allenai.org/olmo
- **Repositories:**
- Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
- Evaluation code: https://github.com/allenai/OLMo-Eval
- Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** [Link](https://arxiv.org/abs/2402.00838)
- **Technical blog post:** https://blog.allenai.org/olmo-open-language-model-87ccfc95f580
- **W&B Logs:** https://wandb.ai/ai2-llm/OLMo-7B/reports/OLMo-7B--Vmlldzo2NzQyMzk5
<!-- - **Press release:** TODO -->
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Inference
Quickly get inference running with the following required installation:
```bash
pip install ai2-olmo
```
Now, proceed as usual with HuggingFace:
```python
import hf_olmo
from transformers import AutoModelForCausalLM, AutoTokenizer
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional verifying cuda
# inputs = {k: v.to('cuda') for k,v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> 'Language modeling is the first step to build natural language generation...'
```
Alternatively, with the pipeline abstraction:
```python
import hf_olmo
from transformers import pipeline
olmo_pipe = pipeline("text-generation", model="allenai/OLMo-7B")
print(olmo_pipe("Language modeling is "))
>> 'Language modeling is a branch of natural language processing that aims to...'
```
Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
The quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues.
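For completeness, a compact sketch of that 8-bit variant (requires `bitsandbytes` and a CUDA GPU; it simply combines the flags and input handling described above):
```python
# 8-bit variant of the inference example above; requires bitsandbytes and a CUDA GPU.
import torch
import hf_olmo  # registers the OLMo architecture with transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B", torch_dtype=torch.float16, load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B")

inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False)
response = olmo.generate(inputs.input_ids.to("cuda"), max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```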
Note, you may see the following error if `ai2-olmo` is not installed correctly, which is caused by internal Python check naming. We'll update the code soon to make this error clearer.
```bash
raise ImportError(
ImportError: This modeling file requires the following packages that were not found in your environment: hf_olmo. Run `pip install hf_olmo`
```
### Fine-tuning
Model fine-tuning can be done from the final checkpoint (the `main` revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.
1. Fine-tune with the OLMo repository:
```bash
torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
--data.paths=[{path_to_data}/input_ids.npy] \
--data.label_mask_paths=[{path_to_data}/label_mask.npy] \
--load_path={path_to_checkpoint} \
--reset_trainer_state
```
For more documentation, see the [GitHub readme](https://github.com/allenai/OLMo?tab=readme-ov-file#fine-tuning).
2. Further fine-tuning support is being developed in AI2's Open Instruct repository. Details are [here](https://github.com/allenai/open-instruct).
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
Core model results for the 7B model are found below.
| | [Llama 7B](https://arxiv.org/abs/2302.13971) | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | [MPT 7B](https://huggingface.co/mosaicml/mpt-7b) | **OLMo 7B** (ours) |
| --------------------------------- | -------- | ---------- | --------- | ------ | ------- |
| arc_challenge | 44.5 | 39.8 | 47.5 | 46.5 | 48.5 |
| arc_easy | 57.0 | 57.7 | 70.4 | 70.5 | 65.4 |
| boolq | 73.1 | 73.5 | 74.6 | 74.2 | 73.4 |
| copa | 85.0 | 87.0 | 86.0 | 85.0 | 90 |
| hellaswag | 74.5 | 74.5 | 75.9 | 77.6 | 76.4 |
| openbookqa | 49.8 | 48.4 | 53.0 | 48.6 | 50.2 |
| piqa | 76.3 | 76.4 | 78.5 | 77.3 | 78.4 |
| sciq | 89.5 | 90.8 | 93.9 | 93.7 | 93.8 |
| winogrande | 68.2 | 67.3 | 68.9 | 69.9 | 67.9 |
| **Core tasks average** | 68.7 | 68.4 | 72.1 | 71.5 | 71.6 |
| truthfulQA (MC2) | 33.9 | 38.5 | 34.0 | 33 | 36.0 |
| MMLU (5 shot MC) | 31.5 | 45.0 | 24.0 | 30.8 | 28.3 |
| GSM8k (mixed eval.) | 10.0 (8shot CoT) | 12.0 (8shot CoT) | 4.0 (5 shot) | 4.5 (5 shot) | 8.5 (8shot CoT) |
| **Full average** | 57.8 | 59.3 | 59.2 | 59.3 | 59.8 |
And for the 1B model:
| task | random | [StableLM 2 1.6b](https://huggingface.co/stabilityai/stablelm-2-1_6b)\* | [Pythia 1B](https://huggingface.co/EleutherAI/pythia-1b) | [TinyLlama 1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) | **OLMo 1B** (ours) |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ | ----------------- | --------- | -------------------------------------- | ------- |
| arc_challenge | 25 | 43.81 | 33.11 | 34.78 | 34.45 |
| arc_easy | 25 | 63.68 | 50.18 | 53.16 | 58.07 |
| boolq | 50 | 76.6 | 61.8 | 64.6 | 60.7 |
| copa | 50 | 84 | 72 | 78 | 79 |
| hellaswag | 25 | 68.2 | 44.7 | 58.7 | 62.5 |
| openbookqa | 25 | 45.8 | 37.8 | 43.6 | 46.4 |
| piqa | 50 | 74 | 69.1 | 71.1 | 73.7 |
| sciq | 25 | 94.7 | 86 | 90.5 | 88.1 |
| winogrande | 50 | 64.9 | 53.3 | 58.9 | 58.9 |
| Average | 36.11 | 68.41 | 56.44 | 61.48 | 62.42 |
\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not yet disclosed the data StableLM was trained on, making comparisons with other efforts challenging.
## Model Details
### Data
For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma) documentation.
### Architecture
OLMo 7B architecture with peer models for comparison.
| | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | PaLM 8B |
|------------------------|-------------------|---------------------|--------------------|--------------------|------------------|
| d_model | 4096 | 4096 | 4096 | 4544 | 4096 |
| num heads | 32 | 32 | 32 | 71 | 16 |
| num layers | 32 | 32 | 32 | 32 | 32 |
| MLP ratio | ~8/3 | ~8/3 | ~8/3 | 4 | 4 |
| LayerNorm type | non-parametric LN | RMSNorm | parametric LN | parametric LN | parametric LN |
| pos embeddings | RoPE | RoPE | RoPE | RoPE | RoPE |
| attention variant | full | GQA | full | MQA | MQA |
| biases | none | none | in LN only | in LN only | none |
| block type | sequential | sequential | sequential | parallel | parallel |
| activation | SwiGLU | SwiGLU | SwiGLU | GeLU | SwiGLU |
| sequence length | 2048 | 4096 | 2048 | 2048 | 2048 |
| batch size (instances) | 2160 | 1024 | 2048 | 2304 | 512 |
| batch size (tokens) | ~4M | ~4M | ~4M | ~4M | ~1M |
| weight tying | no | no | no | no | yes |
### Hyperparameters
AdamW optimizer parameters are shown below.
| Size | Peak LR | Betas | Epsilon | Weight Decay |
|------|------------|-----------------|-------------|--------------|
| 1B | 4.0E-4 | (0.9, 0.95) | 1.0E-5 | 0.1 |
| 7B | 3.0E-4 | (0.9, 0.99) | 1.0E-5 | 0.1 |
Optimizer settings comparison with peer models.
| | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) |
|-----------------------|------------------|---------------------|--------------------|--------------------|
| warmup steps | 5000 | 2000 | 2000 | 1000 |
| peak LR | 3.0E-04 | 3.0E-04 | 3.0E-04 | 6.0E-04 |
| minimum LR | 3.0E-05 | 3.0E-05 | 3.0E-05 | 1.2E-05 |
| weight decay | 0.1 | 0.1 | 0.1 | 0.1 |
| beta1 | 0.9 | 0.9 | 0.9 | 0.99 |
| beta2 | 0.95 | 0.95 | 0.95 | 0.999 |
| epsilon | 1.0E-05 | 1.0E-05 | 1.0E-05 | 1.0E-05 |
| LR schedule | linear | cosine | cosine | cosine |
| gradient clipping | global 1.0 | global 1.0 | global 1.0 | global 1.0 |
| gradient reduce dtype | FP32 | FP32 | FP32 | BF16 |
| optimizer state dtype | FP32 | most likely FP32 | FP32 | FP32 |
## Environmental Impact
OLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML.
A summary of the environmental impact. Further details are available in the paper.
| | GPU Type | Power Consumption From GPUs | Carbon Intensity (kg CO₂e/KWh) | Carbon Emissions (tCO₂eq) |
|-----------|------------|-----------------------------|--------------------------------|---------------------------|
| OLMo 7B Twin | MI250X ([LUMI supercomputer](https://www.lumi-supercomputer.eu)) | 135 MWh | 0* | 0* |
| OLMo 7B | A100-40GB ([MosaicML](https://www.mosaicml.com)) | 104 MWh | 0.656 | 75.05 |
## Bias, Risks, and Limitations
Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.
Otherwise, many facts from OLMo or any LLM will often not be true, so they should be checked.
## Citation
**BibTeX:**
```
@article{Groeneveld2023OLMo,
title={OLMo: Accelerating the Science of Language Models},
author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh},
journal={Preprint},
year={2024}
}
```
**APA:**
Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.
## Model Card Contact
For errors in this model card, contact Nathan or Akshita, `{nathanl, akshitab} at allenai dot org`. | {"language": ["en"], "license": "apache-2.0", "datasets": ["allenai/dolma"]} | monology/olmo-git | null | [
"transformers",
"safetensors",
"olmo",
"text-generation",
"custom_code",
"en",
"dataset:allenai/dolma",
"arxiv:2402.00838",
"arxiv:2302.13971",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null | 2024-04-16T20:29:07+00:00 | [
"2402.00838",
"2302.13971"
] | [
"en"
] | TAGS
#transformers #safetensors #olmo #text-generation #custom_code #en #dataset-allenai/dolma #arxiv-2402.00838 #arxiv-2302.13971 #license-apache-2.0 #autotrain_compatible #region-us
| <img src="URL alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Model Card for OLMo 7B
======================
OLMo is a series of Open Language Models designed to enable the science of language models.
The OLMo models are trained on the Dolma dataset.
We release all code, checkpoints, logs (coming soon), and details involved in training these models.
Model Details
-------------
The core models released in this batch are the following:
We are releasing many checkpoints for these models, one for every 1000 training steps.
The naming convention is 'step1000-tokens4B'.
In particular, we focus on four revisions of the 7B models:
To load a specific model revision with HuggingFace, simply add the argument 'revision':
All revisions/branches are listed in the file 'URL'.
Or, you can access all the revisions for the models via the following code snippet:
A few revisions were lost due to an error, but the vast majority are present.
### Model Description
* Developed by: Allen Institute for AI (AI2)
* Supported by: Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW
* Model type: a Transformer style autoregressive language model.
* Language(s) (NLP): English
* License: The code and model are released under Apache 2.0.
* Contact: Technical inquiries: 'olmo at allenai dot org'. Press: 'press at allenai dot org'
* Date cutoff: Feb./March 2023 based on Dolma dataset version.
### Model Sources
* Project Page: URL
* Repositories:
+ Core repo (training, inference, fine-tuning etc.): URL
+ Evaluation code: URL
+ Further fine-tuning code: URL
* Paper: Link
* Technical blog post: URL
* W&B Logs: URL
Uses
----
### Inference
Quickly get inference running with the following required installation:
Now, proceed as usual with HuggingFace:
Alternatively, with the pipeline abstraction:
Or, you can make this slightly faster by quantizing the model, e.g. 'AutoModelForCausalLM.from\_pretrained("allenai/OLMo-7B", torch\_dtype=torch.float16, load\_in\_8bit=True)' (requires 'bitsandbytes').
The quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as 'inputs.input\_ids.to('cuda')' to avoid potential issues.
Note, you may see the following error if 'ai2-olmo' is not installed correctly, which is caused by internal Python check naming. We'll update the code soon to make this error clearer.
### Fine-tuning
Model fine-tuning can be done from the final checkpoint (the 'main' revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.
1. Fine-tune with the OLMo repository:
For more documentation, see the GitHub readme.
2. Further fine-tuning support is being developed in AI2's Open Instruct repository. Details are here.
Evaluation
----------
Core model results for the 7B model are found below.
And for the 1B model:
\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not yet disclosed the data StableLM was trained on, making comparisons with other efforts challenging.
Model Details
-------------
### Data
For training data details, please see the Dolma documentation.
### Architecture
OLMo 7B architecture with peer models for comparison.
### Hyperparameters
AdamW optimizer parameters are shown below.
Optimizer settings comparison with peer models.
Environmental Impact
--------------------
OLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML.
A summary of the environmental impact. Further details are available in the paper.
Bias, Risks, and Limitations
----------------------------
Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.
Otherwise, many facts from OLMo or any LLM will often not be true, so they should be checked.
BibTeX:
APA:
Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.
Model Card Contact
------------------
For errors in this model card, contact Nathan or Akshita, '{nathanl, akshitab} at allenai dot org'.
| [
"### Model Description\n\n\n* Developed by: Allen Institute for AI (AI2)\n* Supported by: Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW\n* Model type: a Transformer style autoregressive language model.\n* Language(s) (NLP): English\n* License: The code and model are released under Apache 2.0.\n* Contact: Technical inquiries: 'olmo at allenai dot org'. Press: 'press at allenai dot org'\n* Date cutoff: Feb./March 2023 based on Dolma dataset version.",
"### Model Sources\n\n\n* Project Page: URL\n* Repositories:\n\t+ Core repo (training, inference, fine-tuning etc.): URL\n\t+ Evaluation code: URL\n\t+ Further fine-tuning code: URL\n* Paper: Link\n* Technical blog post: URL\n* W&B Logs: URL\n\n\nUses\n----",
"### Inference\n\n\nQuickly get inference running with the following required installation:\n\n\nNow, proceed as usual with HuggingFace:\n\n\nAlternatively, with the pipeline abstraction:\n\n\nOr, you can make this slightly faster by quantizing the model, e.g. 'AutoModelForCausalLM.from\\_pretrained(\"allenai/OLMo-7B\", torch\\_dtype=torch.float16, load\\_in\\_8bit=True)' (requires 'bitsandbytes').\nThe quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as 'inputs.input\\_ids.to('cuda')' to avoid potential issues.\n\n\nNote, you may see the following error if 'ai2-olmo' is not installed correctly, which is caused by internal Python check naming. We'll update the code soon to make this error clearer.",
"### Fine-tuning\n\n\nModel fine-tuning can be done from the final checkpoint (the 'main' revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.\n\n\n1. Fine-tune with the OLMo repository:\n\n\nFor more documentation, see the GitHub readme.\n\n\n2. Further fine-tuning support is being developing in AI2's Open Instruct repository. Details are here.\n\n\nEvaluation\n----------\n\n\nCore model results for the 7B model are found below.\n\n\n\nAnd for the 1B model:\n\n\n\n\\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not disclosed yet the data StableLM was trained on, making comparisons with other efforts challenging.\n\n\nModel Details\n-------------",
"### Data\n\n\nFor training data details, please see the Dolma documentation.",
"### Architecture\n\n\nOLMo 7B architecture with peer models for comparison.",
"### Hyperparameters\n\n\nAdamW optimizer parameters are shown below.\n\n\n\nOptimizer settings comparison with peer models.\n\n\n\nEnvironmental Impact\n--------------------\n\n\nOLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML.\nA summary of the environmental impact. Further details are available in the paper.\n\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nLike any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.\nSuch content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.\n\n\nOtherwise, many facts from OLMo or any LLM will often not be true, so they should be checked.\n\n\nBibTeX:\n\n\nAPA:\n\n\nGroeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.\n\n\nModel Card Contact\n------------------\n\n\nFor errors in this model card, contact Nathan or Akshita, '{nathanl, akshitab} at allenai dot org'."
] | [
"TAGS\n#transformers #safetensors #olmo #text-generation #custom_code #en #dataset-allenai/dolma #arxiv-2402.00838 #arxiv-2302.13971 #license-apache-2.0 #autotrain_compatible #region-us \n",
"### Model Description\n\n\n* Developed by: Allen Institute for AI (AI2)\n* Supported by: Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW\n* Model type: a Transformer style autoregressive language model.\n* Language(s) (NLP): English\n* License: The code and model are released under Apache 2.0.\n* Contact: Technical inquiries: 'olmo at allenai dot org'. Press: 'press at allenai dot org'\n* Date cutoff: Feb./March 2023 based on Dolma dataset version.",
"### Model Sources\n\n\n* Project Page: URL\n* Repositories:\n\t+ Core repo (training, inference, fine-tuning etc.): URL\n\t+ Evaluation code: URL\n\t+ Further fine-tuning code: URL\n* Paper: Link\n* Technical blog post: URL\n* W&B Logs: URL\n\n\nUses\n----",
"### Inference\n\n\nQuickly get inference running with the following required installation:\n\n\nNow, proceed as usual with HuggingFace:\n\n\nAlternatively, with the pipeline abstraction:\n\n\nOr, you can make this slightly faster by quantizing the model, e.g. 'AutoModelForCausalLM.from\\_pretrained(\"allenai/OLMo-7B\", torch\\_dtype=torch.float16, load\\_in\\_8bit=True)' (requires 'bitsandbytes').\nThe quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as 'inputs.input\\_ids.to('cuda')' to avoid potential issues.\n\n\nNote, you may see the following error if 'ai2-olmo' is not installed correctly, which is caused by internal Python check naming. We'll update the code soon to make this error clearer.",
"### Fine-tuning\n\n\nModel fine-tuning can be done from the final checkpoint (the 'main' revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.\n\n\n1. Fine-tune with the OLMo repository:\n\n\nFor more documentation, see the GitHub readme.\n\n\n2. Further fine-tuning support is being developing in AI2's Open Instruct repository. Details are here.\n\n\nEvaluation\n----------\n\n\nCore model results for the 7B model are found below.\n\n\n\nAnd for the 1B model:\n\n\n\n\\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not disclosed yet the data StableLM was trained on, making comparisons with other efforts challenging.\n\n\nModel Details\n-------------",
"### Data\n\n\nFor training data details, please see the Dolma documentation.",
"### Architecture\n\n\nOLMo 7B architecture with peer models for comparison.",
"### Hyperparameters\n\n\nAdamW optimizer parameters are shown below.\n\n\n\nOptimizer settings comparison with peer models.\n\n\n\nEnvironmental Impact\n--------------------\n\n\nOLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML.\nA summary of the environmental impact. Further details are available in the paper.\n\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nLike any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.\nSuch content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.\n\n\nOtherwise, many facts from OLMo or any LLM will often not be true, so they should be checked.\n\n\nBibTeX:\n\n\nAPA:\n\n\nGroeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.\n\n\nModel Card Contact\n------------------\n\n\nFor errors in this model card, contact Nathan or Akshita, '{nathanl, akshitab} at allenai dot org'."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_shp1_dpo5
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3217
- Rewards/chosen: -0.2355
- Rewards/rejected: -0.1439
- Rewards/accuracies: 0.5
- Rewards/margins: -0.0916
- Logps/rejected: -232.6779
- Logps/chosen: -231.7293
- Logits/rejected: -0.7538
- Logits/chosen: -0.6906
## Model description
More information needed
## Intended uses & limitations
More information needed
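Pending details from the authors, the sketch below shows one plausible way to use this checkpoint. It assumes the repository contains a standard PEFT (LoRA-style) adapter for the base chat model and that you have access to 'meta-llama/Llama-2-7b-chat-hf'; the prompt and generation settings are purely illustrative.
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base Llama-2-7b-chat-hf weights and applies this DPO-trained adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    "guoyu-zhang/model_shp1_dpo5",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

prompt = "Explain what direct preference optimization (DPO) does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```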
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0133 | 2.67 | 100 | 1.4101 | 1.9038 | 1.9347 | 0.4900 | -0.0309 | -228.5207 | -227.4508 | -0.7876 | -0.7426 |
| 0.0129 | 5.33 | 200 | 1.8574 | -0.4863 | -1.0084 | 0.5200 | 0.5221 | -234.4071 | -232.2310 | -0.7312 | -0.6995 |
| 0.0238 | 8.0 | 300 | 2.0008 | -1.7671 | -1.9006 | 0.5500 | 0.1335 | -236.1914 | -234.7925 | -0.5278 | -0.4564 |
| 0.0 | 10.67 | 400 | 2.3318 | -0.1632 | -0.0429 | 0.5 | -0.1202 | -232.4761 | -231.5847 | -0.7531 | -0.6899 |
| 0.0 | 13.33 | 500 | 2.3339 | -0.2064 | -0.0831 | 0.4900 | -0.1232 | -232.5565 | -231.6711 | -0.7531 | -0.6898 |
| 0.0 | 16.0 | 600 | 2.3248 | -0.2171 | -0.1093 | 0.5 | -0.1078 | -232.6089 | -231.6927 | -0.7533 | -0.6905 |
| 0.0 | 18.67 | 700 | 2.3383 | -0.2611 | -0.1205 | 0.5 | -0.1407 | -232.6312 | -231.7806 | -0.7531 | -0.6900 |
| 0.0 | 21.33 | 800 | 2.3166 | -0.2398 | -0.1509 | 0.5 | -0.0890 | -232.6919 | -231.7380 | -0.7534 | -0.6906 |
| 0.0 | 24.0 | 900 | 2.3279 | -0.2442 | -0.1312 | 0.5 | -0.1130 | -232.6526 | -231.7468 | -0.7537 | -0.6908 |
| 0.0 | 26.67 | 1000 | 2.3217 | -0.2355 | -0.1439 | 0.5 | -0.0916 | -232.6779 | -231.7293 | -0.7538 | -0.6906 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_shp1_dpo5", "results": []}]} | guoyu-zhang/model_shp1_dpo5 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-16T20:30:33+00:00 | [] | [] | TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
| model\_shp1\_dpo5
=================
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.3217
* Rewards/chosen: -0.2355
* Rewards/rejected: -0.1439
* Rewards/accuracies: 0.5
* Rewards/margins: -0.0916
* Logps/rejected: -232.6779
* Logps/chosen: -231.7293
* Logits/rejected: -0.7538
* Logits/chosen: -0.6906
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 4
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* training\_steps: 1000
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.1
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
reinforcement-learning | null |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| {"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-PixelCopter-v02", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "16.90 +/- 10.67", "name": "mean_reward", "verified": false}]}]}]} | lacknerm/Reinforce-PixelCopter-v02 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null | 2024-04-16T20:31:42+00:00 | [] | [] | TAGS
#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
|
# Reinforce Agent playing Pixelcopter-PLE-v0
This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
| [
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] | [
"TAGS\n#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n",
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # the course notebooks use gym; newer setups may use gymnasium instead

# load_from_hub is the helper defined in the Deep RL course notebook; it downloads and unpickles the Q-table.
model = load_from_hub(repo_id="eulpicard/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.52 +/- 2.73", "name": "mean_reward", "verified": false}]}]}]} | eulpicard/q-Taxi-v3 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-16T20:32:06+00:00 | [] | [] | TAGS
#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3 .
## Usage
| [
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] | [
"TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
null | transformers |
# LeroyDyer/Mixtral_AI_Cyber_Child-Q4_K_M-GGUF
This model was converted to GGUF format from [`LeroyDyer/Mixtral_AI_Cyber_Child`](https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_Child) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_Child) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo LeroyDyer/Mixtral_AI_Cyber_Child-Q4_K_M-GGUF --model mixtral_ai_cyber_child.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo LeroyDyer/Mixtral_AI_Cyber_Child-Q4_K_M-GGUF --model mixtral_ai_cyber_child.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mixtral_ai_cyber_child.Q4_K_M.gguf -n 128
``` | {"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": []} | LeroyDyer/Mixtral_AI_Cyber_Child-Q4 | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T20:32:59+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #en #license-mit #endpoints_compatible #region-us
|
# LeroyDyer/Mixtral_AI_Cyber_Child-Q4_K_M-GGUF
This model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_Cyber_Child' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# LeroyDyer/Mixtral_AI_Cyber_Child-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_Cyber_Child' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #en #license-mit #endpoints_compatible #region-us \n",
"# LeroyDyer/Mixtral_AI_Cyber_Child-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_Cyber_Child' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
# CodeQwen1.5-7B-Chat - EXL2 3.0bpw
This is a 3.0bpw EXL2 quant of [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 13.6136 |
| 7.0 | 13.6220 |
| 6.0 | 13.6524 |
| 5.0 | 13.7689 |
| 4.0 | 13.9466 |
| 3.5 | 14.2961 |
| 3.0 | 16.8038 |
| 2.75 | 16.9662 |
| 2.5 | 17.4515 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "chat"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B-Chat"} | Dracones/CodeQwen1.5-7B-Chat_exl2_3.0bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"chat",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B-Chat",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"3-bit",
"region:us"
] | null | 2024-04-16T20:33:49+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us
| CodeQwen1.5-7B-Chat - EXL2 3.0bpw
=================================
This is a 3.0bpw EXL2 quant of Qwen/CodeQwen1.5-7B-Chat
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
text-generation | transformers |
# CodeQwen1.5-7B-Chat - EXL2 2.75bpw
This is a 2.75bpw EXL2 quant of [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 13.6136 |
| 7.0 | 13.6220 |
| 6.0 | 13.6524 |
| 5.0 | 13.7689 |
| 4.0 | 13.9466 |
| 3.5 | 14.2961 |
| 3.0 | 16.8038 |
| 2.75 | 16.9662 |
| 2.5 | 17.4515 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "chat"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B-Chat"} | Dracones/CodeQwen1.5-7B-Chat_exl2_2.75bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"chat",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B-Chat",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T20:34:25+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| CodeQwen1.5-7B-Chat - EXL2 2.75bpw
==================================
This is a 2.75bpw EXL2 quant of Qwen/CodeQwen1.5-7B-Chat
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
text-generation | transformers |
# CodeQwen1.5-7B-Chat - EXL2 2.5bpw
This is a 2.5bpw EXL2 quant of [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 13.6136 |
| 7.0 | 13.6220 |
| 6.0 | 13.6524 |
| 5.0 | 13.7689 |
| 4.0 | 13.9466 |
| 3.5 | 14.2961 |
| 3.0 | 16.8038 |
| 2.75 | 16.9662 |
| 2.5 | 17.4515 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B-Chat"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "chat"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B-Chat"} | Dracones/CodeQwen1.5-7B-Chat_exl2_2.5bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"chat",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B-Chat",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T20:35:03+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| CodeQwen1.5-7B-Chat - EXL2 2.5bpw
=================================
This is a 2.5bpw EXL2 quant of Qwen/CodeQwen1.5-7B-Chat
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #chat #conversational #en #base_model-Qwen/CodeQwen1.5-7B-Chat #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
sentence-similarity | sentence-transformers |
# alexakkol/bge-m3-nowrep
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('alexakkol/bge-m3-nowrep')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | alexakkol/bge-m3-nowrep | null | [
"sentence-transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T20:35:34+00:00 | [] | [] | TAGS
#sentence-transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #sentence-similarity #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Full Model Architecture
## Citing & Authors
| [
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] |
text-generation | transformers |
# Spaetzle-v67-7b
Spaetzle-v67-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [cstr/Spaetzle-v53-7b](https://huggingface.co/cstr/Spaetzle-v53-7b)
* [cstr/Spaetzle-v55-7b](https://huggingface.co/cstr/Spaetzle-v55-7b)
## 🧩 Configuration
```yaml
models:
- model: cstr/Spaetzle-v66-7b
# no parameters necessary for base model
- model: cstr/Spaetzle-v53-7b
parameters:
density: 0.60
weight: 0.30
- model: cstr/Spaetzle-v55-7b
parameters:
density: 0.65
weight: 0.30
merge_method: dare_ties
base_model: cstr/Spaetzle-v66-7b
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "cstr/Spaetzle-v67-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "cstr/Spaetzle-v53-7b", "cstr/Spaetzle-v55-7b"], "base_model": ["cstr/Spaetzle-v53-7b", "cstr/Spaetzle-v55-7b"]} | cstr/Spaetzle-v67-7b | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"cstr/Spaetzle-v53-7b",
"cstr/Spaetzle-v55-7b",
"conversational",
"base_model:cstr/Spaetzle-v53-7b",
"base_model:cstr/Spaetzle-v55-7b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T20:41:37+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #cstr/Spaetzle-v53-7b #cstr/Spaetzle-v55-7b #conversational #base_model-cstr/Spaetzle-v53-7b #base_model-cstr/Spaetzle-v55-7b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Spaetzle-v67-7b
Spaetzle-v67-7b is a merge of the following models using LazyMergekit:
* cstr/Spaetzle-v53-7b
* cstr/Spaetzle-v55-7b
## Configuration
## Usage
| [
"# Spaetzle-v67-7b\n\nSpaetzle-v67-7b is a merge of the following models using LazyMergekit:\n* cstr/Spaetzle-v53-7b\n* cstr/Spaetzle-v55-7b",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #cstr/Spaetzle-v53-7b #cstr/Spaetzle-v55-7b #conversational #base_model-cstr/Spaetzle-v53-7b #base_model-cstr/Spaetzle-v55-7b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Spaetzle-v67-7b\n\nSpaetzle-v67-7b is a merge of the following models using LazyMergekit:\n* cstr/Spaetzle-v53-7b\n* cstr/Spaetzle-v55-7b",
"## Configuration",
"## Usage"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6628
- F1 Score: 0.5947
- Accuracy: 0.6045
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6739 | 15.38 | 200 | 0.6671 | 0.6025 | 0.6098 |
| 0.6313 | 30.77 | 400 | 0.6854 | 0.5836 | 0.5928 |
| 0.6108 | 46.15 | 600 | 0.7110 | 0.5828 | 0.5827 |
| 0.5909 | 61.54 | 800 | 0.7288 | 0.5774 | 0.5811 |
| 0.5735 | 76.92 | 1000 | 0.7417 | 0.5786 | 0.5836 |
| 0.5628 | 92.31 | 1200 | 0.7428 | 0.5761 | 0.5783 |
| 0.5538 | 107.69 | 1400 | 0.7542 | 0.5762 | 0.5821 |
| 0.548 | 123.08 | 1600 | 0.7438 | 0.5772 | 0.5830 |
| 0.5418 | 138.46 | 1800 | 0.7701 | 0.5742 | 0.5802 |
| 0.5363 | 153.85 | 2000 | 0.7652 | 0.5769 | 0.5818 |
| 0.5306 | 169.23 | 2200 | 0.7780 | 0.5775 | 0.5814 |
| 0.5265 | 184.62 | 2400 | 0.7778 | 0.5731 | 0.5783 |
| 0.5186 | 200.0 | 2600 | 0.8005 | 0.5704 | 0.5751 |
| 0.5117 | 215.38 | 2800 | 0.8318 | 0.5723 | 0.5792 |
| 0.5053 | 230.77 | 3000 | 0.8126 | 0.5697 | 0.5751 |
| 0.4952 | 246.15 | 3200 | 0.8568 | 0.5663 | 0.5720 |
| 0.4892 | 261.54 | 3400 | 0.8126 | 0.5603 | 0.5666 |
| 0.482 | 276.92 | 3600 | 0.8204 | 0.5674 | 0.5698 |
| 0.475 | 292.31 | 3800 | 0.8571 | 0.5718 | 0.5748 |
| 0.4692 | 307.69 | 4000 | 0.8748 | 0.5647 | 0.5694 |
| 0.4631 | 323.08 | 4200 | 0.8509 | 0.5682 | 0.5717 |
| 0.4561 | 338.46 | 4400 | 0.8913 | 0.5645 | 0.5666 |
| 0.4506 | 353.85 | 4600 | 0.9121 | 0.5664 | 0.5688 |
| 0.4456 | 369.23 | 4800 | 0.8971 | 0.5696 | 0.5704 |
| 0.4397 | 384.62 | 5000 | 0.8876 | 0.5674 | 0.5682 |
| 0.4346 | 400.0 | 5200 | 0.9151 | 0.5691 | 0.5691 |
| 0.4285 | 415.38 | 5400 | 0.9341 | 0.5672 | 0.5701 |
| 0.4241 | 430.77 | 5600 | 0.9155 | 0.5613 | 0.5650 |
| 0.4194 | 446.15 | 5800 | 0.9312 | 0.5670 | 0.5676 |
| 0.4131 | 461.54 | 6000 | 0.9507 | 0.5661 | 0.5688 |
| 0.4106 | 476.92 | 6200 | 0.9497 | 0.5626 | 0.5641 |
| 0.4039 | 492.31 | 6400 | 0.9397 | 0.5639 | 0.5653 |
| 0.4009 | 507.69 | 6600 | 0.9700 | 0.5743 | 0.5758 |
| 0.3957 | 523.08 | 6800 | 0.9703 | 0.5693 | 0.5713 |
| 0.3934 | 538.46 | 7000 | 0.9780 | 0.5722 | 0.5735 |
| 0.3896 | 553.85 | 7200 | 0.9689 | 0.5695 | 0.5701 |
| 0.387 | 569.23 | 7400 | 0.9852 | 0.5775 | 0.5789 |
| 0.3839 | 584.62 | 7600 | 1.0056 | 0.5752 | 0.5773 |
| 0.3807 | 600.0 | 7800 | 0.9820 | 0.5757 | 0.5780 |
| 0.3786 | 615.38 | 8000 | 0.9849 | 0.5671 | 0.5685 |
| 0.3754 | 630.77 | 8200 | 0.9832 | 0.5736 | 0.5761 |
| 0.3723 | 646.15 | 8400 | 1.0144 | 0.5740 | 0.5754 |
| 0.3701 | 661.54 | 8600 | 0.9928 | 0.5722 | 0.5729 |
| 0.369 | 676.92 | 8800 | 0.9911 | 0.5746 | 0.5761 |
| 0.3673 | 692.31 | 9000 | 1.0067 | 0.5688 | 0.5710 |
| 0.3655 | 707.69 | 9200 | 1.0117 | 0.5719 | 0.5739 |
| 0.3631 | 723.08 | 9400 | 1.0031 | 0.5751 | 0.5767 |
| 0.3645 | 738.46 | 9600 | 1.0056 | 0.5715 | 0.5726 |
| 0.3624 | 753.85 | 9800 | 1.0083 | 0.5709 | 0.5729 |
| 0.3619 | 769.23 | 10000 | 1.0100 | 0.5715 | 0.5735 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T20:42:14+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_EMP\_H3K4me1-seqsight\_32768\_512\_30M-L32\_all
====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6628
* F1 Score: 0.5947
* Accuracy: 0.6045
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_hh_usp2_dpo9
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6846
- Rewards/chosen: -0.5731
- Rewards/rejected: -5.6737
- Rewards/accuracies: 0.6400
- Rewards/margins: 5.1005
- Logps/rejected: -120.8027
- Logps/chosen: -111.7147
- Logits/rejected: -0.2343
- Logits/chosen: -0.2007
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0325 | 2.67 | 100 | 1.7152 | -6.7346 | -9.0742 | 0.6300 | 2.3396 | -124.5811 | -118.5607 | -0.0481 | -0.0442 |
| 0.0969 | 5.33 | 200 | 3.2390 | -15.5354 | -20.2430 | 0.6400 | 4.7075 | -136.9909 | -128.3395 | -0.4433 | -0.4023 |
| 0.0009 | 8.0 | 300 | 3.0813 | -4.4428 | -8.6631 | 0.6200 | 4.2203 | -124.1243 | -116.0144 | -0.5257 | -0.4694 |
| 0.0 | 10.67 | 400 | 2.6646 | -0.4885 | -5.6039 | 0.6300 | 5.1153 | -120.7252 | -111.6207 | -0.2334 | -0.1998 |
| 0.0 | 13.33 | 500 | 2.6520 | -0.5356 | -5.6689 | 0.6400 | 5.1333 | -120.7974 | -111.6730 | -0.2340 | -0.2002 |
| 0.0 | 16.0 | 600 | 2.6313 | -0.5224 | -5.6829 | 0.6300 | 5.1605 | -120.8130 | -111.6584 | -0.2338 | -0.2001 |
| 0.0 | 18.67 | 700 | 2.6655 | -0.5685 | -5.6921 | 0.6400 | 5.1236 | -120.8232 | -111.7095 | -0.2342 | -0.2004 |
| 0.0 | 21.33 | 800 | 2.6412 | -0.5360 | -5.6727 | 0.6400 | 5.1366 | -120.8016 | -111.6735 | -0.2342 | -0.2004 |
| 0.0 | 24.0 | 900 | 2.6564 | -0.5614 | -5.6738 | 0.6400 | 5.1124 | -120.8029 | -111.7017 | -0.2347 | -0.2010 |
| 0.0 | 26.67 | 1000 | 2.6846 | -0.5731 | -5.6737 | 0.6400 | 5.1005 | -120.8027 | -111.7147 | -0.2343 | -0.2007 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_hh_usp2_dpo9", "results": []}]} | guoyu-zhang/model_hh_usp2_dpo9 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-16T20:44:20+00:00 | [] | [] | TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
| model\_hh\_usp2\_dpo9
=====================
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.6846
* Rewards/chosen: -0.5731
* Rewards/rejected: -5.6737
* Rewards/accuracies: 0.6400
* Rewards/margins: 5.1005
* Logps/rejected: -120.8027
* Logps/chosen: -111.7147
* Logits/rejected: -0.2343
* Logits/chosen: -0.2007
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 4
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* training\_steps: 1000
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
feature-extraction | transformers | # jina-website-1-0-1-BAAI_bge-small-en-v1.5-50_9062874564
## Model Description
jina-website-1-0-1-BAAI_bge-small-en-v1.5-50_9062874564 is a fine-tuned version of BAAI/bge-small-en-v1.5 designed for a specific domain.
## Use Case
This model is designed to support various applications in natural language processing and understanding.
## Associated Dataset
The dataset for this model can be found [**here**](https://huggingface.co/datasets/florianhoenicke/jina-website-1-0-1-BAAI_bge-small-en-v1.5-50_9062874564).
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from transformers import AutoModel, AutoTokenizer
llm_name = "jina-website-1-0-1-BAAI_bge-small-en-v1.5-50_9062874564"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name)
tokens = tokenizer("Your text here", return_tensors="pt")
embedding = model(**tokens)
```
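The call above returns the full model output rather than a single vector. To turn it into a sentence embedding for search or comparison, BGE-style models typically take the `[CLS]` token and L2-normalize it; the pooling choice in this sketch is an assumption, since the card does not state it:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

llm_name = "florianhoenicke/jina-website-1-0-1-BAAI_bge-small-en-v1.5-50_9062874564"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name)

sentences = ["first query", "second query"]
tokens = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    output = model(**tokens)

# CLS pooling (assumed, as used by the BAAI/bge base model), then L2 normalization
embeddings = F.normalize(output.last_hidden_state[:, 0], p=2, dim=1)
similarity = embeddings @ embeddings.T  # cosine similarities, since embeddings are unit-normalized
print(similarity)
```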
| {} | florianhoenicke/jina-website-1-0-1-BAAI_bge-small-en-v1.5-50_9062874564 | null | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T20:44:28+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #feature-extraction #endpoints_compatible #region-us
| # jina-website-1-0-1-BAAI_bge-small-en-v1.5-50_9062874564
## Model Description
jina-website-1-0-1-BAAI_bge-small-en-v1.5-50_9062874564 is a fine-tuned version of BAAI/bge-small-en-v1.5 designed for a specific domain.
## Use Case
This model is designed to support various applications in natural language processing and understanding.
## Associated Dataset
The dataset for this model can be found here.
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
| [
"# jina-website-1-0-1-BAAI_bge-small-en-v1.5-50_9062874564",
"## Model Description\n\njina-website-1-0-1-BAAI_bge-small-en-v1.5-50_9062874564 is a fine-tuned version of BAAI/bge-small-en-v1.5 designed for a specific domain.",
"## Use Case\nThis model is designed to support various applications in natural language processing and understanding.",
"## Associated Dataset\n\nThis the dataset for this model can be found here.",
"## How to Use\n\nThis model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:"
] | [
"TAGS\n#transformers #safetensors #bert #feature-extraction #endpoints_compatible #region-us \n",
"# jina-website-1-0-1-BAAI_bge-small-en-v1.5-50_9062874564",
"## Model Description\n\njina-website-1-0-1-BAAI_bge-small-en-v1.5-50_9062874564 is a fine-tuned version of BAAI/bge-small-en-v1.5 designed for a specific domain.",
"## Use Case\nThis model is designed to support various applications in natural language processing and understanding.",
"## Associated Dataset\n\nThis the dataset for this model can be found here.",
"## How to Use\n\nThis model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:"
] |
sentence-similarity | sentence-transformers |
# sbastola/muril-base-cased-sentence-transformer-snli-nepali-2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sbastola/muril-base-cased-sentence-transformer-snli-nepali-2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sbastola/muril-base-cased-sentence-transformer-snli-nepali-2')
model = AutoModel.from_pretrained('sbastola/muril-base-cased-sentence-transformer-snli-nepali-2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sbastola/muril-base-cased-sentence-transformer-snli-nepali-2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 859 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.LabelAccuracyEvaluator.LabelAccuracyEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 171,
"weight_decay": 0.01
}
```
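Put together, a run with the DataLoader, loss, and fit() parameters listed above might look roughly like the following sketch. The SNLI preprocessing, label handling, and base checkpoint name (`google/muril-base-cased`) are assumptions rather than the authors' exact script:

```python
from torch.utils.data import DataLoader
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, InputExample, losses

# Wrap the MuRIL checkpoint as a SentenceTransformer; a mean-pooling head is added
# automatically, matching the architecture printout below.
model = SentenceTransformer("google/muril-base-cased")

# Assumed preprocessing: SNLI premise/hypothesis pairs with integer labels
# (the card trains on a Nepali variant of SNLI, which is not reproduced here).
snli = load_dataset("stanfordnlp/snli", split="train[:5%]")
train_examples = [
    InputExample(texts=[row["premise"], row["hypothesis"]], label=row["label"])
    for row in snli
    if row["label"] != -1  # drop unlabeled pairs
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)

train_loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=3,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=2,
    warmup_steps=171,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```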
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "datasets": ["stanfordnlp/snli"], "pipeline_tag": "sentence-similarity"} | sbastola/muril-base-cased-sentence-transformer-snli-nepali-2 | null | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:stanfordnlp/snli",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T20:46:27+00:00 | [] | [] | TAGS
#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #dataset-stanfordnlp/snli #endpoints_compatible #region-us
|
# sbastola/muril-base-cased-sentence-transformer-snli-nepali-2
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 859 with parameters:
Loss:
'sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# sbastola/muril-base-cased-sentence-transformer-snli-nepali-2\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 859 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #dataset-stanfordnlp/snli #endpoints_compatible #region-us \n",
"# sbastola/muril-base-cased-sentence-transformer-snli-nepali-2\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 859 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |