pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1 to 900k) | metadata (stringlengths, 2 to 438k) | id (stringlengths, 5 to 122) | last_modified (null) | tags (sequencelengths, 1 to 1.84k) | sha (null) | created_at (stringlengths, 25 to 25) | arxiv (sequencelengths, 0 to 201) | languages (sequencelengths, 0 to 1.83k) | tags_str (stringlengths, 17 to 9.34k) | text_str (stringlengths, 0 to 389k) | text_lists (sequencelengths, 0 to 722) | processed_texts (sequencelengths, 1 to 723)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
audio-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-best-CREMA-sentiment-analysis
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Top2 Accuracy: 0.8940
- Loss: 0.8287
- Accuracy: 0.7074
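As the card does not yet include a usage example, here is a minimal inference sketch using the 🤗 Transformers `pipeline` API. The repository id comes from this card's metadata; the audio path is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for audio classification.
classifier = pipeline(
    "audio-classification",
    model="Supreeta03/wav2vec2-base-CREMAD-sentiment-analysis",
)

# Placeholder path to a local WAV clip; top_k=2 mirrors the card's Top2 Accuracy metric.
predictions = classifier("sample.wav", top_k=2)
print(predictions)
```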
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Top2 Accuracy | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:-------------:|:---------------:|:--------:|
| 1.7824 | 0.98 | 43 | 0.4982 | 1.7749 | 0.2482 |
| 1.7115 | 1.99 | 87 | 0.5466 | 1.6566 | 0.3638 |
| 1.5255 | 2.99 | 131 | 0.6604 | 1.5017 | 0.4418 |
| 1.3716 | 4.0 | 175 | 0.7679 | 1.3359 | 0.5636 |
| 1.2436 | 4.98 | 218 | 0.8271 | 1.1862 | 0.6407 |
| 1.1366 | 5.99 | 262 | 0.8315 | 1.1223 | 0.6595 |
| 1.0322 | 6.99 | 306 | 0.8593 | 1.0422 | 0.6747 |
| 0.9668 | 8.0 | 350 | 0.8907 | 0.9335 | 0.7222 |
| 0.8932 | 8.98 | 393 | 0.8943 | 0.9093 | 0.7231 |
| 0.8431 | 9.99 | 437 | 0.8692 | 0.9163 | 0.7115 |
| 0.8047 | 10.99 | 481 | 0.8996 | 0.8488 | 0.7375 |
| 0.7444 | 12.0 | 525 | 0.8898 | 0.8611 | 0.7204 |
| 0.6921 | 12.98 | 568 | 0.8916 | 0.8399 | 0.7258 |
| 0.6973 | 13.99 | 612 | 0.8844 | 0.8425 | 0.7231 |
| 0.632 | 14.99 | 656 | 0.8880 | 0.8308 | 0.7249 |
| 0.6275 | 16.0 | 700 | 0.8862 | 0.8400 | 0.7177 |
| 0.6153 | 16.98 | 743 | 0.8934 | 0.8266 | 0.7330 |
| 0.5597 | 17.99 | 787 | 0.8934 | 0.8157 | 0.7357 |
| 0.5658 | 18.99 | 831 | 0.8862 | 0.8015 | 0.7446 |
| 0.54 | 20.0 | 875 | 0.8943 | 0.8368 | 0.7258 |
| 0.5301 | 20.98 | 918 | 0.9023 | 0.8095 | 0.7321 |
| 0.5262 | 21.99 | 962 | 0.8817 | 0.8521 | 0.7168 |
| 0.4754 | 22.99 | 1006 | 0.8987 | 0.8003 | 0.7428 |
| 0.4753 | 24.0 | 1050 | 0.8952 | 0.7988 | 0.7410 |
| 0.455 | 24.98 | 1093 | 0.8952 | 0.7902 | 0.7419 |
| 0.4574 | 25.99 | 1137 | 0.8871 | 0.8030 | 0.7366 |
| 0.4618 | 26.99 | 1181 | 0.8970 | 0.8051 | 0.7294 |
| 0.4222 | 28.0 | 1225 | 0.8925 | 0.8108 | 0.7267 |
| 0.4301 | 28.98 | 1268 | 0.8934 | 0.8066 | 0.7339 |
| 0.4147 | 29.49 | 1290 | 0.8916 | 0.8072 | 0.7357 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "facebook/wav2vec2-base", "model-index": [{"name": "wav2vec-best-CREMA-sentiment-analysis", "results": []}]} | Supreeta03/wav2vec2-base-CREMAD-sentiment-analysis | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T05:51:58+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #base_model-facebook/wav2vec2-base #license-apache-2.0 #endpoints_compatible #region-us
| wav2vec-best-CREMA-sentiment-analysis
=====================================
This model is a fine-tuned version of facebook/wav2vec2-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Top2 Accuracy: 0.8940
* Loss: 0.8287
* Accuracy: 0.7074
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #base_model-facebook/wav2vec2-base #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
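Since the quick-start code is not filled in, the following is a minimal loading sketch. It assumes the checkpoint is a causal language model (the repository name suggests a zephyr-7b-beta fine-tune), which the card does not confirm; the repository id comes from this card's metadata.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chiangcw/zephyr-7b-beta-Agent-Instruct_e1"

# Assumption: causal LM architecture; swap the Auto class if the checkpoint differs.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map needs accelerate

inputs = tokenizer("Hello, how can I help you today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```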
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | chiangcw/zephyr-7b-beta-Agent-Instruct_e1 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T05:52:46+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
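Since the quick-start code is not filled in, the following is a minimal text-generation sketch. The repository id comes from this card's metadata and the prompt is a placeholder; the fine-tune's expected prompt format is not documented here.

```python
from transformers import pipeline

# GPT-NeoX-based causal LM per the card's tags; device_map needs accelerate.
generator = pipeline(
    "text-generation",
    model="ryanyeo/kirnect-koalpaca-polyglot-5.8B-food",
    device_map="auto",
)

# Placeholder prompt; adjust to the prompt format the fine-tune was trained with.
print(generator("Recommend a Korean dinner menu.", max_new_tokens=64)[0]["generated_text"])
```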
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ryanyeo/kirnect-koalpaca-polyglot-5.8B-food | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T05:54:47+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null | LLM2Vec-Mistral-7B-Instruct-v2-mntp-supervised-GGUF
Original model: [LLM2Vec-Mistral-7B-Instruct-v2-mntp-supervised](https://huggingface.co/McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp-supervised)
Use llama.cpp's conversion and quantization scripts. | {} | gaianet/LLM2Vec-Mistral-7B-Instruct-v2-mntp-supervised-GGUF | null | [
"gguf",
"region:us"
] | null | 2024-04-18T05:54:55+00:00 | [] | [] | TAGS
#gguf #region-us
| LLM2Vec-Mistral-7B-Instruct-v2-mntp-supervised-GGUF
Original model: LLM2Vec-Mistral-7B-Instruct-v2-mntp-supervised-GGUF
Use URL's conversion and quantization scripts. | [] | [
"TAGS\n#gguf #region-us \n"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2963
- Accuracy: 0.9093
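As the card has no usage example, here is a minimal inference sketch with the image-classification `pipeline`. The repository id comes from this card's metadata; the image path is a placeholder (the EuroSAT-style tiles implied by the model name are small RGB satellite images, which the pipeline resizes to the model's 224x224 input).

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ipurwadi/swin-tiny-patch4-window7-224-finetuned-eurosat",
)

# Placeholder path to a local image file.
print(classifier("satellite_tile.png"))
```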
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4987 | 1.0 | 86 | 0.4083 | 0.8693 |
| 0.3837 | 2.0 | 172 | 0.4003 | 0.8611 |
| 0.3595 | 3.0 | 258 | 0.2963 | 0.9093 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-tiny-patch4-window7-224", "model-index": [{"name": "swin-tiny-patch4-window7-224-finetuned-eurosat", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9093137254901961, "name": "Accuracy"}]}]}]} | ipurwadi/swin-tiny-patch4-window7-224-finetuned-eurosat | null | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T05:55:57+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| swin-tiny-patch4-window7-224-finetuned-eurosat
==============================================
This model is a fine-tuned version of microsoft/swin-tiny-patch4-window7-224 on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2963
* Accuracy: 0.9093
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.2
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Intent-classification-BERT-Large-Ashuv3
This model is a fine-tuned version of [google-bert/bert-large-uncased](https://huggingface.co/google-bert/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2610
- Accuracy: 0.8951
- F1: 0.8807
- Precision: 0.8812
- Recall: 0.8820
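As the card has no usage example, here is a minimal inference sketch with the text-classification `pipeline`. The repository id comes from this card's metadata; the example utterance is a placeholder and the intent label set is not documented in the card.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Narkantak/Intent-classification-BERT-Large-Ashuv3",
)

# Placeholder utterance; the returned label comes from the model's own id2label mapping.
print(classifier("I want to cancel my order and get a refund."))
```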
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.6762 | 0.24 | 10 | 1.3120 | 0.5280 | 0.4993 | 0.6178 | 0.5370 |
| 0.9717 | 0.49 | 20 | 0.7487 | 0.8571 | 0.8402 | 0.8670 | 0.8455 |
| 0.6375 | 0.73 | 30 | 0.4393 | 0.8509 | 0.8479 | 0.8862 | 0.8548 |
| 0.4006 | 0.98 | 40 | 0.2427 | 0.9068 | 0.9005 | 0.9228 | 0.9075 |
| 0.2291 | 1.22 | 50 | 0.1875 | 0.9068 | 0.8940 | 0.9106 | 0.8902 |
| 0.2634 | 1.46 | 60 | 0.2204 | 0.9068 | 0.8977 | 0.9135 | 0.9051 |
| 0.1916 | 1.71 | 70 | 0.1730 | 0.9130 | 0.9053 | 0.9232 | 0.9123 |
| 0.1881 | 1.95 | 80 | 0.1676 | 0.9130 | 0.9051 | 0.9232 | 0.9133 |
| 0.2692 | 2.2 | 90 | 0.1728 | 0.9068 | 0.8958 | 0.9423 | 0.8790 |
| 0.1425 | 2.44 | 100 | 0.1757 | 0.9068 | 0.8958 | 0.9423 | 0.8790 |
| 0.2674 | 2.68 | 110 | 0.3307 | 0.8758 | 0.8608 | 0.8756 | 0.8713 |
| 0.2385 | 2.93 | 120 | 0.1878 | 0.9006 | 0.8901 | 0.9059 | 0.8988 |
| 0.1868 | 3.17 | 130 | 0.1679 | 0.9130 | 0.9027 | 0.9147 | 0.9097 |
| 0.2281 | 3.41 | 140 | 0.1796 | 0.9130 | 0.9057 | 0.9274 | 0.9133 |
| 0.1459 | 3.66 | 150 | 0.1982 | 0.9068 | 0.8960 | 0.9077 | 0.9049 |
| 0.161 | 3.9 | 160 | 0.2266 | 0.8944 | 0.8772 | 0.9012 | 0.8765 |
| 0.1441 | 4.15 | 170 | 0.2062 | 0.8944 | 0.8889 | 0.9115 | 0.8935 |
| 0.172 | 4.39 | 180 | 0.2208 | 0.9006 | 0.8922 | 0.9216 | 0.8988 |
| 0.1365 | 4.63 | 190 | 0.2088 | 0.9068 | 0.8974 | 0.9244 | 0.9045 |
| 0.1795 | 4.88 | 200 | 0.2011 | 0.8820 | 0.8682 | 0.8936 | 0.8569 |
| 0.204 | 5.12 | 210 | 0.2377 | 0.8820 | 0.8642 | 0.8656 | 0.8721 |
| 0.1409 | 5.37 | 220 | 0.2178 | 0.8944 | 0.8852 | 0.9003 | 0.8776 |
| 0.1771 | 5.61 | 230 | 0.2284 | 0.8758 | 0.8624 | 0.8871 | 0.8511 |
| 0.1926 | 5.85 | 240 | 0.2211 | 0.8944 | 0.8815 | 0.8990 | 0.8761 |
| 0.2142 | 6.1 | 250 | 0.2217 | 0.9193 | 0.9082 | 0.9306 | 0.9130 |
| 0.1125 | 6.34 | 260 | 0.2321 | 0.9006 | 0.8889 | 0.9420 | 0.8702 |
| 0.1473 | 6.59 | 270 | 0.2129 | 0.9130 | 0.9057 | 0.9274 | 0.9133 |
| 0.1468 | 6.83 | 280 | 0.2318 | 0.9130 | 0.9057 | 0.9274 | 0.9133 |
| 0.1951 | 7.07 | 290 | 0.1957 | 0.9006 | 0.8879 | 0.9061 | 0.8788 |
| 0.1659 | 7.32 | 300 | 0.1961 | 0.9006 | 0.8872 | 0.9143 | 0.8752 |
| 0.1265 | 7.56 | 310 | 0.2058 | 0.9130 | 0.9049 | 0.9226 | 0.9097 |
| 0.1774 | 7.8 | 320 | 0.2223 | 0.9068 | 0.8974 | 0.9244 | 0.9045 |
| 0.2609 | 8.05 | 330 | 0.2218 | 0.8944 | 0.8833 | 0.8906 | 0.8811 |
| 0.1079 | 8.29 | 340 | 0.3312 | 0.8820 | 0.8675 | 0.8672 | 0.8680 |
| 0.1729 | 8.54 | 350 | 0.3627 | 0.8696 | 0.8500 | 0.8540 | 0.8554 |
| 0.2337 | 8.78 | 360 | 0.2526 | 0.9006 | 0.8872 | 0.9143 | 0.8752 |
| 0.1573 | 9.02 | 370 | 0.2072 | 0.9130 | 0.9049 | 0.9226 | 0.9097 |
| 0.1843 | 9.27 | 380 | 0.2605 | 0.9068 | 0.8991 | 0.9210 | 0.9085 |
| 0.1521 | 9.51 | 390 | 0.2695 | 0.9006 | 0.8920 | 0.9081 | 0.8966 |
| 0.193 | 9.76 | 400 | 0.3340 | 0.9130 | 0.9039 | 0.9187 | 0.9061 |
| 0.1034 | 10.0 | 410 | 0.3391 | 0.9068 | 0.8948 | 0.9025 | 0.9049 |
| 0.1348 | 10.24 | 420 | 0.3377 | 0.9006 | 0.8902 | 0.8998 | 0.8930 |
| 0.0856 | 10.49 | 430 | 0.3274 | 0.8882 | 0.8768 | 0.8920 | 0.8692 |
| 0.1877 | 10.73 | 440 | 0.3401 | 0.8696 | 0.8498 | 0.8504 | 0.8514 |
| 0.1775 | 10.98 | 450 | 0.4162 | 0.8882 | 0.8708 | 0.8716 | 0.8799 |
| 0.1357 | 11.22 | 460 | 0.3992 | 0.8820 | 0.8652 | 0.8622 | 0.8716 |
| 0.0878 | 11.46 | 470 | 0.3920 | 0.8944 | 0.8803 | 0.8772 | 0.8883 |
| 0.1892 | 11.71 | 480 | 0.3148 | 0.8696 | 0.8499 | 0.8472 | 0.8549 |
| 0.1712 | 11.95 | 490 | 0.3028 | 0.8758 | 0.8589 | 0.8585 | 0.8597 |
| 0.0914 | 12.2 | 500 | 0.3450 | 0.8820 | 0.8688 | 0.8705 | 0.8680 |
| 0.1793 | 12.44 | 510 | 0.3617 | 0.8882 | 0.8758 | 0.8872 | 0.8692 |
| 0.1355 | 12.68 | 520 | 0.4130 | 0.8820 | 0.8688 | 0.8705 | 0.8680 |
| 0.1518 | 12.93 | 530 | 0.5015 | 0.8944 | 0.8798 | 0.8808 | 0.8878 |
| 0.1778 | 13.17 | 540 | 0.3596 | 0.8882 | 0.8716 | 0.8709 | 0.8804 |
| 0.1662 | 13.41 | 550 | 0.3716 | 0.9006 | 0.8864 | 0.8868 | 0.8930 |
| 0.1105 | 13.66 | 560 | 0.3452 | 0.9006 | 0.8874 | 0.8903 | 0.8966 |
| 0.1369 | 13.9 | 570 | 0.3606 | 0.8944 | 0.8807 | 0.8824 | 0.8883 |
| 0.2051 | 14.15 | 580 | 0.3497 | 0.8882 | 0.8750 | 0.8784 | 0.8728 |
| 0.1441 | 14.39 | 590 | 0.4031 | 0.8820 | 0.8664 | 0.8649 | 0.8680 |
| 0.1586 | 14.63 | 600 | 0.3853 | 0.8820 | 0.8664 | 0.8649 | 0.8680 |
| 0.0974 | 14.88 | 610 | 0.4037 | 0.8820 | 0.8664 | 0.8649 | 0.8680 |
| 0.0799 | 15.12 | 620 | 0.5252 | 0.8820 | 0.8688 | 0.8705 | 0.8680 |
| 0.0969 | 15.37 | 630 | 0.5702 | 0.8820 | 0.8691 | 0.8699 | 0.8716 |
| 0.1664 | 15.61 | 640 | 0.5281 | 0.8820 | 0.8688 | 0.8705 | 0.8680 |
| 0.175 | 15.85 | 650 | 0.4865 | 0.8820 | 0.8688 | 0.8705 | 0.8680 |
| 0.1904 | 16.1 | 660 | 0.3893 | 0.8696 | 0.8528 | 0.8520 | 0.8549 |
| 0.1054 | 16.34 | 670 | 0.4320 | 0.8758 | 0.8612 | 0.8636 | 0.8597 |
| 0.1657 | 16.59 | 680 | 0.5669 | 0.8820 | 0.8688 | 0.8705 | 0.8680 |
| 0.1089 | 16.83 | 690 | 0.5642 | 0.8820 | 0.8677 | 0.8649 | 0.8716 |
| 0.0831 | 17.07 | 700 | 0.4782 | 0.8820 | 0.8709 | 0.8744 | 0.8716 |
| 0.1518 | 17.32 | 710 | 0.5122 | 0.8820 | 0.8695 | 0.8720 | 0.8680 |
| 0.1203 | 17.56 | 720 | 0.5720 | 0.8820 | 0.8695 | 0.8720 | 0.8680 |
| 0.1185 | 17.8 | 730 | 0.5798 | 0.8820 | 0.8698 | 0.8703 | 0.8716 |
| 0.1065 | 18.05 | 740 | 0.5495 | 0.8820 | 0.8685 | 0.8701 | 0.8716 |
| 0.13 | 18.29 | 750 | 0.6271 | 0.8820 | 0.8687 | 0.8696 | 0.8716 |
| 0.1382 | 18.54 | 760 | 0.6307 | 0.8758 | 0.8585 | 0.8556 | 0.8633 |
| 0.0979 | 18.78 | 770 | 0.6167 | 0.8758 | 0.8585 | 0.8556 | 0.8633 |
| 0.1328 | 19.02 | 780 | 0.6011 | 0.8758 | 0.8585 | 0.8556 | 0.8633 |
| 0.1561 | 19.27 | 790 | 0.5938 | 0.8696 | 0.8517 | 0.8495 | 0.8549 |
| 0.1638 | 19.51 | 800 | 0.6397 | 0.8696 | 0.8528 | 0.8520 | 0.8549 |
| 0.1358 | 19.76 | 810 | 0.6917 | 0.8758 | 0.8614 | 0.8649 | 0.8597 |
| 0.1298 | 20.0 | 820 | 0.6769 | 0.8696 | 0.8528 | 0.8489 | 0.8585 |
| 0.1102 | 20.24 | 830 | 0.6891 | 0.8758 | 0.8610 | 0.8594 | 0.8669 |
| 0.127 | 20.49 | 840 | 0.6950 | 0.8820 | 0.8685 | 0.8701 | 0.8716 |
| 0.1719 | 20.73 | 850 | 0.6719 | 0.8882 | 0.8754 | 0.8773 | 0.8799 |
| 0.1503 | 20.98 | 860 | 0.6462 | 0.8820 | 0.8675 | 0.8666 | 0.8716 |
| 0.1118 | 21.22 | 870 | 0.6405 | 0.8820 | 0.8690 | 0.8705 | 0.8680 |
| 0.0991 | 21.46 | 880 | 0.6492 | 0.8758 | 0.8614 | 0.8600 | 0.8633 |
| 0.1288 | 21.71 | 890 | 0.7045 | 0.8820 | 0.8688 | 0.8705 | 0.8680 |
| 0.1414 | 21.95 | 900 | 0.7439 | 0.8820 | 0.8688 | 0.8705 | 0.8680 |
| 0.1744 | 22.2 | 910 | 0.7353 | 0.8820 | 0.8688 | 0.8705 | 0.8680 |
| 0.1072 | 22.44 | 920 | 0.7524 | 0.8820 | 0.8688 | 0.8705 | 0.8680 |
| 0.0931 | 22.68 | 930 | 0.7671 | 0.8758 | 0.8614 | 0.8649 | 0.8597 |
| 0.0775 | 22.93 | 940 | 0.7442 | 0.8758 | 0.8614 | 0.8649 | 0.8597 |
| 0.0713 | 23.17 | 950 | 0.7456 | 0.8758 | 0.8614 | 0.8649 | 0.8597 |
| 0.1027 | 23.41 | 960 | 0.7528 | 0.8820 | 0.8664 | 0.8649 | 0.8680 |
| 0.1163 | 23.66 | 970 | 0.7503 | 0.8820 | 0.8664 | 0.8649 | 0.8680 |
| 0.1067 | 23.9 | 980 | 0.7359 | 0.8758 | 0.8622 | 0.8660 | 0.8597 |
| 0.0955 | 24.15 | 990 | 0.7457 | 0.8820 | 0.8676 | 0.8687 | 0.8680 |
| 0.0874 | 24.39 | 1000 | 0.7663 | 0.8820 | 0.8685 | 0.8701 | 0.8716 |
| 0.0865 | 24.63 | 1010 | 0.7761 | 0.8820 | 0.8685 | 0.8701 | 0.8716 |
| 0.1378 | 24.88 | 1020 | 0.7761 | 0.8820 | 0.8691 | 0.8699 | 0.8716 |
| 0.1411 | 25.12 | 1030 | 0.7714 | 0.8820 | 0.8676 | 0.8687 | 0.8680 |
| 0.1034 | 25.37 | 1040 | 0.7662 | 0.8820 | 0.8685 | 0.8700 | 0.8680 |
| 0.0709 | 25.61 | 1050 | 0.7720 | 0.8820 | 0.8670 | 0.8681 | 0.8680 |
| 0.1286 | 25.85 | 1060 | 0.7809 | 0.8820 | 0.8670 | 0.8681 | 0.8680 |
| 0.1191 | 26.1 | 1070 | 0.7861 | 0.8820 | 0.8676 | 0.8687 | 0.8680 |
| 0.0902 | 26.34 | 1080 | 0.7888 | 0.8820 | 0.8691 | 0.8699 | 0.8716 |
| 0.1054 | 26.59 | 1090 | 0.7894 | 0.8820 | 0.8698 | 0.8703 | 0.8716 |
| 0.1142 | 26.83 | 1100 | 0.7914 | 0.8820 | 0.8691 | 0.8699 | 0.8716 |
| 0.1175 | 27.07 | 1110 | 0.7923 | 0.8820 | 0.8691 | 0.8699 | 0.8716 |
| 0.1319 | 27.32 | 1120 | 0.7938 | 0.8820 | 0.8685 | 0.8701 | 0.8716 |
| 0.1181 | 27.56 | 1130 | 0.7967 | 0.8820 | 0.8685 | 0.8701 | 0.8716 |
| 0.0858 | 27.8 | 1140 | 0.8003 | 0.8820 | 0.8685 | 0.8701 | 0.8716 |
| 0.0697 | 28.05 | 1150 | 0.8025 | 0.8820 | 0.8685 | 0.8701 | 0.8716 |
| 0.0644 | 28.29 | 1160 | 0.8050 | 0.8820 | 0.8685 | 0.8701 | 0.8716 |
| 0.1123 | 28.54 | 1170 | 0.8063 | 0.8820 | 0.8685 | 0.8701 | 0.8716 |
| 0.0998 | 28.78 | 1180 | 0.8078 | 0.8820 | 0.8685 | 0.8701 | 0.8716 |
| 0.1297 | 29.02 | 1190 | 0.8095 | 0.8820 | 0.8685 | 0.8701 | 0.8716 |
| 0.1133 | 29.27 | 1200 | 0.8094 | 0.8820 | 0.8685 | 0.8701 | 0.8716 |
| 0.1122 | 29.51 | 1210 | 0.8095 | 0.8820 | 0.8685 | 0.8701 | 0.8716 |
| 0.1115 | 29.76 | 1220 | 0.8096 | 0.8820 | 0.8685 | 0.8701 | 0.8716 |
| 0.0692 | 30.0 | 1230 | 0.8095 | 0.8820 | 0.8685 | 0.8701 | 0.8716 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "google-bert/bert-large-uncased", "model-index": [{"name": "Intent-classification-BERT-Large-Ashuv3", "results": []}]} | Narkantak/Intent-classification-BERT-Large-Ashuv3 | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T05:58:56+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-large-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| Intent-classification-BERT-Large-Ashuv3
=======================================
This model is a fine-tuned version of google-bert/bert-large-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2610
* Accuracy: 0.8951
* F1: 0.8807
* Precision: 0.8812
* Recall: 0.8820
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.1.2
* Datasets 2.1.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.1.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-large-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.1.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
#no parameters necessary for base model
- model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
density: 0.5
weight: 0.5
- model: BioMistral/BioMistral-7B
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
normalize: false
int8_mask: true
dtype: float16
```
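As a usage note not included in the original card: once merged, the checkpoint loads like any other Transformers causal LM. A minimal sketch, assuming the merge output in this repository, is shown below; the repository id comes from this card's metadata.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Mistral-7B-based merge, so it loads as a causal LM; device_map needs accelerate.
model_id = "mergekit-community/mergekit-ties-itmchpd"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
```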
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["mistralai/Mistral-7B-v0.1", "mistralai/Mistral-7B-Instruct-v0.2", "BioMistral/BioMistral-7B"]} | mergekit-community/mergekit-ties-itmchpd | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:BioMistral/BioMistral-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T05:59:37+00:00 | [
"2306.01708"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-mistralai/Mistral-7B-v0.1 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-BioMistral/BioMistral-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the TIES merge method using mistralai/Mistral-7B-v0.1 as a base.
### Models Merged
The following models were included in the merge:
* mistralai/Mistral-7B-Instruct-v0.2
* BioMistral/BioMistral-7B
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using mistralai/Mistral-7B-v0.1 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* mistralai/Mistral-7B-Instruct-v0.2\n* BioMistral/BioMistral-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-mistralai/Mistral-7B-v0.1 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-BioMistral/BioMistral-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using mistralai/Mistral-7B-v0.1 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* mistralai/Mistral-7B-Instruct-v0.2\n* BioMistral/BioMistral-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
reinforcement-learning | stable-baselines3 |
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below is an assumption, so check the repository's file list.
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed, not stated in this card.
checkpoint = load_from_hub(repo_id="jrcp98/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "ppo", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "256.19 +/- 21.99", "name": "mean_reward", "verified": false}]}]}]} | jrcp98/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-18T06:04:06+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# ppo Agent playing LunarLander-v2
This is a trained model of a ppo agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# ppo Agent playing LunarLander-v2\nThis is a trained model of a ppo agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# ppo Agent playing LunarLander-v2\nThis is a trained model of a ppo agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
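Since the quick-start code is not filled in, the following is a minimal generation sketch. The repository id comes from this card's metadata; the use of a chat template is an assumption based on the card's `conversational` tag and should be verified.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "appvoid/instruct-palmer-003-beta-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map needs accelerate

# Assumption: the tokenizer ships a chat template; fall back to a plain prompt if it does not.
messages = [{"role": "user", "content": "Summarize what DPO fine-tuning does."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```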
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": ["trl", "dpo"]} | appvoid/instruct-palmer-003-beta-4 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-18T06:05:19+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #trl #dpo #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #trl #dpo #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
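Pending an official snippet from the authors, a minimal hypothetical sketch using the Transformers text-classification pipeline is shown below; the repository id is taken from this card's metadata and the example sentence is illustrative only.

```python
from transformers import pipeline

# Hypothetical quick-start for this ConvBERT classifier; the repository id comes
# from this card's metadata and the input sentence is an illustrative example.
classifier = pipeline("text-classification", model="SOUMYADEEPSAR/convbert_polbias")
print(classifier("The new policy hurts working families while helping large corporations."))
```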
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | SOUMYADEEPSAR/convbert_polbias | null | [
"transformers",
"safetensors",
"convbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:08:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #convbert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #convbert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
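As a placeholder until the authors provide their own snippet, the sketch below shows one plausible way to run the classifier with AutoTokenizer and AutoModelForSequenceClassification; the repository id comes from this card's metadata, and the Indonesian example sentence is an assumption rather than an official test case.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical quick-start; the repository id is taken from this card's metadata.
repo_id = "faizahmp/finetune_indobert_v4"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Illustrative Indonesian input: "The service was fast and very satisfying."
inputs = tokenizer("Pelayanannya cepat dan sangat memuaskan.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

print(model.config.id2label)  # label names come from the model's config
print(probs)
```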
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | faizahmp/finetune_indobert_v4 | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:12:44+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** DattaBS
- **License:** apache-2.0
- **Finetuned from model :** meta-llama/Llama-2-7b-hf
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "meta-llama/Llama-2-7b-hf"} | DattaBS/llama7b_NonQuant-SFT_polarity50 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:meta-llama/Llama-2-7b-hf",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:15:12+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #en #base_model-meta-llama/Llama-2-7b-hf #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: DattaBS
- License: apache-2.0
- Finetuned from model : meta-llama/Llama-2-7b-hf
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: DattaBS\n- License: apache-2.0\n- Finetuned from model : meta-llama/Llama-2-7b-hf\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #en #base_model-meta-llama/Llama-2-7b-hf #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: DattaBS\n- License: apache-2.0\n- Finetuned from model : meta-llama/Llama-2-7b-hf\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
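No official snippet is provided; the sketch below is a hypothetical quick-start that assumes the repository exposes a standard causal-LM checkpoint derived from zephyr-7b-beta. The repository id is taken from this card's metadata and the prompt is purely illustrative.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hypothetical quick-start; assumes a standard causal-LM checkpoint is published
# under this repository id (taken from this card's metadata).
repo_id = "chiangcw/zephyr-7b-beta-Agent-Instruct_e3"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("List three tasks an LLM agent can automate:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=96)[0], skip_special_tokens=True))
```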
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | chiangcw/zephyr-7b-beta-Agent-Instruct_e3 | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:15:56+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
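As a stopgap, a hypothetical quick-start is sketched below; the repository id comes from this card's metadata, and the chat prompt, dtype, and generation length are illustrative assumptions rather than author recommendations.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hypothetical quick-start; the repository id is taken from this card's metadata.
repo_id = "ekle-me/gemma-Code-Instruct-Finetune-test-105"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build a single-turn chat prompt with the tokenizer's chat template.
chat = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```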
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ekle-me/gemma-Code-Instruct-Finetune-test-105 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T06:19:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # Test
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [jeiku/Zephyr_beta_32k_7B](https://huggingface.co/jeiku/Zephyr_beta_32k_7B) as a base.
### Models Merged
The following models were included in the merge:
* [jeiku/Zephyr_beta_32k_7B](https://huggingface.co/jeiku/Zephyr_beta_32k_7B) + [jeiku/Synthetic_Soul_1k_Mistral_128](https://huggingface.co/jeiku/Synthetic_Soul_1k_Mistral_128)
* [jeiku/Zephyr_beta_32k_7B](https://huggingface.co/jeiku/Zephyr_beta_32k_7B) + [jeiku/Theory_of_Mind_Mistral](https://huggingface.co/jeiku/Theory_of_Mind_Mistral)
* [jeiku/Zephyr_beta_32k_7B](https://huggingface.co/jeiku/Zephyr_beta_32k_7B) + [monsterapi/mistral_7b_norobots](https://huggingface.co/monsterapi/mistral_7b_norobots)
* [jeiku/Zephyr_beta_32k_7B](https://huggingface.co/jeiku/Zephyr_beta_32k_7B) + [monsterapi/mistral_7b_WizardLMEvolInstruct70k](https://huggingface.co/monsterapi/mistral_7b_WizardLMEvolInstruct70k)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: jeiku/Zephyr_beta_32k_7B+monsterapi/mistral_7b_WizardLMEvolInstruct70k
- model: jeiku/Zephyr_beta_32k_7B+jeiku/Synthetic_Soul_1k_Mistral_128
- model: jeiku/Zephyr_beta_32k_7B+jeiku/Theory_of_Mind_Mistral
- model: jeiku/Zephyr_beta_32k_7B+monsterapi/mistral_7b_norobots
merge_method: model_stock
base_model: jeiku/Zephyr_beta_32k_7B
dtype: bfloat16
``` | {"license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["jeiku/Zephyr_beta_32k_7B", "jeiku/Synthetic_Soul_1k_Mistral_128", "jeiku/Zephyr_beta_32k_7B", "jeiku/Theory_of_Mind_Mistral", "jeiku/Zephyr_beta_32k_7B", "monsterapi/mistral_7b_norobots", "jeiku/Zephyr_beta_32k_7B", "jeiku/Zephyr_beta_32k_7B", "monsterapi/mistral_7b_WizardLMEvolInstruct70k"]} | jeiku/32kTest_7B | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:jeiku/Zephyr_beta_32k_7B",
"base_model:jeiku/Synthetic_Soul_1k_Mistral_128",
"base_model:jeiku/Theory_of_Mind_Mistral",
"base_model:monsterapi/mistral_7b_norobots",
"base_model:monsterapi/mistral_7b_WizardLMEvolInstruct70k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T06:20:13+00:00 | [
"2403.19522"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2403.19522 #base_model-jeiku/Zephyr_beta_32k_7B #base_model-jeiku/Synthetic_Soul_1k_Mistral_128 #base_model-jeiku/Theory_of_Mind_Mistral #base_model-monsterapi/mistral_7b_norobots #base_model-monsterapi/mistral_7b_WizardLMEvolInstruct70k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Test
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the Model Stock merge method using jeiku/Zephyr_beta_32k_7B as a base.
### Models Merged
The following models were included in the merge:
* jeiku/Zephyr_beta_32k_7B + jeiku/Synthetic_Soul_1k_Mistral_128
* jeiku/Zephyr_beta_32k_7B + jeiku/Theory_of_Mind_Mistral
* jeiku/Zephyr_beta_32k_7B + monsterapi/mistral_7b_norobots
* jeiku/Zephyr_beta_32k_7B + monsterapi/mistral_7b_WizardLMEvolInstruct70k
### Configuration
The following YAML configuration was used to produce this model:
| [
"# Test\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the Model Stock merge method using jeiku/Zephyr_beta_32k_7B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* jeiku/Zephyr_beta_32k_7B + jeiku/Synthetic_Soul_1k_Mistral_128\n* jeiku/Zephyr_beta_32k_7B + jeiku/Theory_of_Mind_Mistral\n* jeiku/Zephyr_beta_32k_7B + monsterapi/mistral_7b_norobots\n* jeiku/Zephyr_beta_32k_7B + monsterapi/mistral_7b_WizardLMEvolInstruct70k",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2403.19522 #base_model-jeiku/Zephyr_beta_32k_7B #base_model-jeiku/Synthetic_Soul_1k_Mistral_128 #base_model-jeiku/Theory_of_Mind_Mistral #base_model-monsterapi/mistral_7b_norobots #base_model-monsterapi/mistral_7b_WizardLMEvolInstruct70k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Test\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the Model Stock merge method using jeiku/Zephyr_beta_32k_7B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* jeiku/Zephyr_beta_32k_7B + jeiku/Synthetic_Soul_1k_Mistral_128\n* jeiku/Zephyr_beta_32k_7B + jeiku/Theory_of_Mind_Mistral\n* jeiku/Zephyr_beta_32k_7B + monsterapi/mistral_7b_norobots\n* jeiku/Zephyr_beta_32k_7B + monsterapi/mistral_7b_WizardLMEvolInstruct70k",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | null |
# SabbatH 2x7B
<img src="https://huggingface.co/Elizezen/SabbatH-2x7B/resolve/main/OIG4.jpg" alt="drawing" style="width:512px;"/>
## Model Description
SabbatH 2x7B is a Japanese language model that has been created by combining two models: [Antler-RP-ja-westlake-chatvector](https://huggingface.co/soramikaduki/Antler-RP-ja-westlake-chatvector) and [Hameln-japanese-mistral-7B](https://huggingface.co/Elizezen/Hameln-japanese-mistral-7B), using a Mixture of Experts (MoE) approach. It also used [chatntq-ja-7b-v1.0](https://huggingface.co/NTQAI/chatntq-ja-7b-v1.0) as a base model.
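This repository distributes GGUF quantizations of the merge. A minimal, unofficial sketch of running one of them with llama-cpp-python follows; the filename, quantization level, and sampling settings are placeholders rather than values documented in this card:

```python
# Sketch only. Assumes `pip install llama-cpp-python` and a GGUF file downloaded
# from this repository; the filename below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./SabbatH-2x7B.Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,        # context length; adjust to available memory
    n_gpu_layers=-1,   # offload all layers to GPU when possible
)

prompt = "夜の街を歩いていると、ふと不思議な看板が目に入った。"  # novel-style opening
out = llm(prompt, max_tokens=256, temperature=0.8)
print(out["choices"][0]["text"])
```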
## Intended Use
The primary purpose of this language model is to assist in generating novels. While it can handle various prompts, it may not excel in providing instruction-based responses. Note that the model's responses are not censored, and occasionally sensitive content may be generated. | {"language": ["ja"], "license": "apache-2.0", "tags": ["causal-lm", "not-for-all-audiences", "nsfw"], "pipeline_tag": "text-generation"} | Elizezen/SabbatH-2x7B-GGUF | null | [
"gguf",
"causal-lm",
"not-for-all-audiences",
"nsfw",
"text-generation",
"ja",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T06:20:21+00:00 | [] | [
"ja"
] | TAGS
#gguf #causal-lm #not-for-all-audiences #nsfw #text-generation #ja #license-apache-2.0 #region-us
|
# SabbatH 2x7B
<img src="URL alt="drawing" style="width:512px;"/>
## Model Description
SabbatH 2x7B is a Japanese language model that has been created by combining two models: Antler-RP-ja-westlake-chatvector and Hameln-japanese-mistral-7B, using a Mixture of Experts (MoE) approach. It also used chatntq-ja-7b-v1.0 as a base model.
## Intended Use
The primary purpose of this language model is to assist in generating novels. While it can handle various prompts, it may not excel in providing instruction-based responses. Note that the model's responses are not censored, and occasionally sensitive content may be generated. | [
"# SabbatH 2x7B\n\n<img src=\"URL alt=\"drawing\" style=\"width:512px;\"/>",
"## Model Description\n\nSabbatH 2x7B is a Japanese language model that has been created by combining two models: Antler-RP-ja-westlake-chatvector and Hameln-japanese-mistral-7B, using a Mixture of Experts (MoE) approach. It also used chatntq-ja-7b-v1.0 as a base model.",
"## Intended Use\n\nThe primary purpose of this language model is to assist in generating novels. While it can handle various prompts, it may not excel in providing instruction-based responses. Note that the model's responses are not censored, and occasionally sensitive content may be generated."
] | [
"TAGS\n#gguf #causal-lm #not-for-all-audiences #nsfw #text-generation #ja #license-apache-2.0 #region-us \n",
"# SabbatH 2x7B\n\n<img src=\"URL alt=\"drawing\" style=\"width:512px;\"/>",
"## Model Description\n\nSabbatH 2x7B is a Japanese language model that has been created by combining two models: Antler-RP-ja-westlake-chatvector and Hameln-japanese-mistral-7B, using a Mixture of Experts (MoE) approach. It also used chatntq-ja-7b-v1.0 as a base model.",
"## Intended Use\n\nThe primary purpose of this language model is to assist in generating novels. While it can handle various prompts, it may not excel in providing instruction-based responses. Note that the model's responses are not censored, and occasionally sensitive content may be generated."
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-llama-lora-no-grad
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7206
- Accuracy: 0.8164
- Precision: 0.8231
- Recall: 0.8164
- Precision Macro: 0.7396
- Recall Macro: 0.7117
- Macro Fpr: 0.0159
- Weighted Fpr: 0.0152
- Weighted Specificity: 0.9752
- Macro Specificity: 0.9865
- Weighted Sensitivity: 0.8226
- Macro Sensitivity: 0.7117
- F1 Micro: 0.8226
- F1 Macro: 0.7177
- F1 Weighted: 0.8190
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mirrored in the Trainer sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
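These settings map directly onto `TrainingArguments`; a minimal sketch is shown below. The model, LoRA wrapping, and datasets are omitted, and `model`, `train_ds`, and `eval_ds` are placeholders rather than names taken from the original training script:

```python
# Sketch only: mirrors the hyperparameters listed above with the Hugging Face Trainer.
# `model`, `train_ds`, and `eval_ds` are assumed to be defined elsewhere (placeholders).
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="tiny-llama-lora-no-grad",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=15,
    lr_scheduler_type="linear",    # Adam betas/epsilon are the defaults listed above
    evaluation_strategy="epoch",   # inferred from the per-epoch validation table
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```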
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Precision Macro | Recall Macro | Macro Fpr | Weighted Fpr | Weighted Specificity | Macro Specificity | Weighted Sensitivity | Macro Sensitivity | F1 Micro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:---------------:|:------------:|:---------:|:------------:|:--------------------:|:-----------------:|:--------------------:|:-----------------:|:--------:|:--------:|:-----------:|
| 1.1276 | 1.0 | 643 | 0.6705 | 0.8087 | 0.8055 | 0.8087 | 0.7053 | 0.6853 | 0.0172 | 0.0166 | 0.9742 | 0.9855 | 0.8087 | 0.6853 | 0.8087 | 0.6806 | 0.8034 |
| 0.503 | 2.0 | 1286 | 0.7206 | 0.8164 | 0.8231 | 0.8164 | 0.7746 | 0.7641 | 0.0163 | 0.0158 | 0.9773 | 0.9862 | 0.8164 | 0.7641 | 0.8164 | 0.7610 | 0.8154 |
| 0.3617 | 3.0 | 1929 | 0.8819 | 0.8164 | 0.8137 | 0.8164 | 0.7499 | 0.7170 | 0.0164 | 0.0158 | 0.9752 | 0.9861 | 0.8164 | 0.7170 | 0.8164 | 0.7242 | 0.8124 |
| 0.0618 | 4.0 | 2572 | 1.1434 | 0.8087 | 0.8107 | 0.8087 | 0.7673 | 0.7293 | 0.0173 | 0.0166 | 0.9727 | 0.9854 | 0.8087 | 0.7293 | 0.8087 | 0.7401 | 0.8074 |
| 0.0243 | 5.0 | 3215 | 1.2966 | 0.8110 | 0.8112 | 0.8110 | 0.7489 | 0.7164 | 0.0171 | 0.0164 | 0.9754 | 0.9858 | 0.8110 | 0.7164 | 0.8110 | 0.7228 | 0.8086 |
| 0.0121 | 6.0 | 3858 | 1.2965 | 0.8195 | 0.8175 | 0.8195 | 0.7312 | 0.7077 | 0.0162 | 0.0155 | 0.9752 | 0.9863 | 0.8195 | 0.7077 | 0.8195 | 0.7143 | 0.8170 |
| 0.0021 | 7.0 | 4501 | 1.3710 | 0.8187 | 0.8168 | 0.8187 | 0.7519 | 0.7112 | 0.0162 | 0.0156 | 0.9756 | 0.9863 | 0.8187 | 0.7112 | 0.8187 | 0.7165 | 0.8152 |
| 0.003 | 8.0 | 5144 | 1.3348 | 0.8203 | 0.8171 | 0.8203 | 0.7417 | 0.7073 | 0.0162 | 0.0154 | 0.9749 | 0.9863 | 0.8203 | 0.7073 | 0.8203 | 0.7159 | 0.8173 |
| 0.0023 | 9.0 | 5787 | 1.4038 | 0.8187 | 0.8149 | 0.8187 | 0.7548 | 0.7030 | 0.0163 | 0.0156 | 0.9742 | 0.9862 | 0.8187 | 0.7030 | 0.8187 | 0.7121 | 0.8141 |
| 0.0033 | 10.0 | 6430 | 1.4021 | 0.8203 | 0.8151 | 0.8203 | 0.7330 | 0.7110 | 0.0162 | 0.0154 | 0.9746 | 0.9863 | 0.8203 | 0.7110 | 0.8203 | 0.7152 | 0.8163 |
| 0.0017 | 11.0 | 7073 | 1.4001 | 0.8211 | 0.8178 | 0.8211 | 0.7361 | 0.7110 | 0.0160 | 0.0153 | 0.9753 | 0.9864 | 0.8211 | 0.7110 | 0.8211 | 0.7155 | 0.8179 |
| 0.0023 | 12.0 | 7716 | 1.4100 | 0.8226 | 0.8189 | 0.8226 | 0.7386 | 0.7127 | 0.0158 | 0.0152 | 0.9754 | 0.9865 | 0.8226 | 0.7127 | 0.8226 | 0.7177 | 0.8195 |
| 0.0034 | 13.0 | 8359 | 1.4273 | 0.8234 | 0.8192 | 0.8234 | 0.7385 | 0.7115 | 0.0158 | 0.0151 | 0.9757 | 0.9866 | 0.8234 | 0.7115 | 0.8234 | 0.7171 | 0.8201 |
| 0.0016 | 14.0 | 9002 | 1.4322 | 0.8226 | 0.8183 | 0.8226 | 0.7382 | 0.7111 | 0.0159 | 0.0152 | 0.9754 | 0.9865 | 0.8226 | 0.7111 | 0.8226 | 0.7168 | 0.8192 |
| 0.0006 | 15.0 | 9645 | 1.4401 | 0.8226 | 0.8178 | 0.8226 | 0.7396 | 0.7117 | 0.0159 | 0.0152 | 0.9752 | 0.9865 | 0.8226 | 0.7117 | 0.8226 | 0.7177 | 0.8190 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall"], "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0", "model-index": [{"name": "tiny-llama-lora-no-grad", "results": []}]} | xshubhamx/tiny-llama-lora-no-grad | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T06:23:18+00:00 | [] | [] | TAGS
#tensorboard #safetensors #generated_from_trainer #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us
| tiny-llama-lora-no-grad
=======================
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7206
* Accuracy: 0.8164
* Precision: 0.8231
* Recall: 0.8164
* Precision Macro: 0.7396
* Recall Macro: 0.7117
* Macro Fpr: 0.0159
* Weighted Fpr: 0.0152
* Weighted Specificity: 0.9752
* Macro Specificity: 0.9865
* Weighted Sensitivity: 0.8226
* Macro Sensitivity: 0.7117
* F1 Micro: 0.8226
* F1 Macro: 0.7177
* F1 Weighted: 0.8190
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
### Training results
### Framework versions
* Transformers 4.35.2
* Pytorch 2.1.0+cu121
* Datasets 2.18.0
* Tokenizers 0.15.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1"
] | [
"TAGS\n#tensorboard #safetensors #generated_from_trainer #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-160m_ian-022_IMDB_n-its-3
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_ian-022_IMDB_n-its-3", "results": []}]} | AlignmentResearch/robust_llm_pythia-160m_ian-022_IMDB_n-its-3 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T06:24:59+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-160m_ian-022_IMDB_n-its-3
This model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-160m_ian-022_IMDB_n-its-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-160m_ian-022_IMDB_n-its-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
MistralAI 7B model fine-tuned for 1 epoch on the Databricks Dolly instruction-tuning dataset.
## Model Details
### Model Description
- **Developed by:** Andrew Chahnwoo Park
- **Model type:** [Mistral](https://arxiv.org/pdf/2310.06825.pdf)
- **Language(s) (NLP):** English
- **License:** apache-2.0
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
### Mistral Repository
- **Repository:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
## Training Details
### Training Data
- [databricks/databricks-dolly-15k]('https://huggingface.co/datasets/databricks/databricks-dolly-15k')
### Training Procedure
- Quantized Low-Rank Adaptation (QLoRA)
- Transformers Trainer
- DataCollatorForSeq2Seq
- Distributed Data Parallel (DDP) across two GPUs (see the sketch below)
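A hedged sketch of how this kind of QLoRA + Trainer setup is commonly wired together; the LoRA rank, target modules, and training arguments below are assumptions, not values taken from the original run:

```python
# Sketch only -- illustrative QLoRA wiring, not the exact script used for this model.
# Requires transformers, peft and bitsandbytes; launch with torchrun/accelerate for DDP.
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForSeq2Seq, Trainer, TrainingArguments)

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1",
                                             quantization_config=bnb)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

model = prepare_model_for_kbit_training(model)
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,  # assumed values
                  task_type="CAUSAL_LM",
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])
model = get_peft_model(model, lora)

collator = DataCollatorForSeq2Seq(tokenizer, model=model)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mistral-7b-qlora-dolly", num_train_epochs=1),
    train_dataset=train_ds,  # pre-tokenized Dolly split (placeholder, built as in Preprocessing)
    data_collator=collator,
)
trainer.train()
```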
#### Preprocessing
Tokenized 'labels' for the dataset were created manually.
A basic instruction-tuning prompt template was used.
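A sketch of the kind of preprocessing this describes: formatting a Dolly record with a basic instruction template and masking the prompt tokens in `labels`. The template itself is not documented in this card, so the one below is an assumption:

```python
# Sketch only: one common way to build `labels` so the loss is computed on the
# response tokens only. The template and max length are assumptions.
def build_example(tokenizer, record, max_len=1024):
    prompt = (
        "### Instruction:\n" + record["instruction"] + "\n\n"
        + ("### Context:\n" + record["context"] + "\n\n" if record["context"] else "")
        + "### Response:\n"
    )
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    response_ids = tokenizer(record["response"] + tokenizer.eos_token,
                             add_special_tokens=False)["input_ids"]
    input_ids = (prompt_ids + response_ids)[:max_len]
    labels = ([-100] * len(prompt_ids) + response_ids)[:max_len]  # -100 = ignored by the loss
    return {"input_ids": input_ids, "labels": labels}
```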
### Hardware
Fine-tuning was performed on 2 × A100 GPUs
- Provided by Gnewsoft during the work period
The model and dataset are too large for free sessions on Google Colab.
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["databricks/databricks-dolly-15k"], "pipeline_tag": "text-generation"} | Chahnwoo/Mistral-7B-v0.1-1E-QLoRA-SFT-Test | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:databricks/databricks-dolly-15k",
"arxiv:2310.06825",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T06:26:43+00:00 | [
"2310.06825"
] | [
"en"
] | TAGS
#transformers #safetensors #mistral #text-generation #en #dataset-databricks/databricks-dolly-15k #arxiv-2310.06825 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
MistralAI 7B model fine-tuned for 1 epoch on the Databricks Dolly instruction-tuning dataset.
## Model Details
### Model Description
- Developed by: Andrew Chahnwoo Park
- Model type: Mistral
- Language(s) (NLP): English
- License: apache-2.0
- Finetuned from model: mistralai/Mistral-7B-v0.1
### Mistral Repository
- Repository: mistralai/Mistral-7B-v0.1
## Training Details
### Training Data
- databricks/databricks-dolly-15k
### Training Procedure
- Quantized Low-Rank Adaptation (QLoRA)
- Transformers Trainer
- DataCollatorForSeq2Seq
- Distributed Data Parallel (DDP) across two GPUs
#### Preprocessing
Tokenized 'labels' for the dataset were created manually.
A basic instruction-tuning prompt template was used.
### Hardware
Fine-tuning was performed on 2 × A100 GPUs
- Provided by Gnewsoft during the work period
The model and dataset are too large for free sessions on Google Colab.
| [
"# Model Card for Model ID\n\nMistralAI 7B model fine-tuned for 1 epoch on Dataricks instruction tuning dataset.",
"## Model Details",
"### Model Description\n\n- Developed by: Andrew Chahnwoo Park\n- Model type: Mistral\n- Language(s) (NLP): English\n- License: apache-2.0\n- Finetuned from model: mistralai/Mistral-7B-v0.1",
"### Mistral Repository\n\n- Repository: mistralai/Mistral-7B-v0.1",
"## Training Details",
"### Training Data\n\n- databricks/databricks-dolly-15k",
"### Training Procedure\n\n- Quantized Low-Rank Adaptation (QLoRA)\n- Transformers Trainer\n- DataCollatorForSeq2Seq\n- Distributed Data Parallel (DDP) across two GPUs",
"#### Preprocessing\n\nManually created tokenized 'labels' for the dataset.\nPrompt template utilized basic template for instruction-tuning",
"### Hardware\n\nPerformed fine-tuning with 2 * A100 GPUs\n- Provided by Gnewsoft during work period\nModel and dataset are too large for free run sessions on Google Colab"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #en #dataset-databricks/databricks-dolly-15k #arxiv-2310.06825 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID\n\nMistralAI 7B model fine-tuned for 1 epoch on Dataricks instruction tuning dataset.",
"## Model Details",
"### Model Description\n\n- Developed by: Andrew Chahnwoo Park\n- Model type: Mistral\n- Language(s) (NLP): English\n- License: apache-2.0\n- Finetuned from model: mistralai/Mistral-7B-v0.1",
"### Mistral Repository\n\n- Repository: mistralai/Mistral-7B-v0.1",
"## Training Details",
"### Training Data\n\n- databricks/databricks-dolly-15k",
"### Training Procedure\n\n- Quantized Low-Rank Adaptation (QLoRA)\n- Transformers Trainer\n- DataCollatorForSeq2Seq\n- Distributed Data Parallel (DDP) across two GPUs",
"#### Preprocessing\n\nManually created tokenized 'labels' for the dataset.\nPrompt template utilized basic template for instruction-tuning",
"### Hardware\n\nPerformed fine-tuning with 2 * A100 GPUs\n- Provided by Gnewsoft during work period\nModel and dataset are too large for free run sessions on Google Colab"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | chiawei0411/blip2-opt-2.7b-646-220k-captions | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:33:23+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
token-classification | transformers | The RoBERTa model has been fine-tuned specifically for token classification in PII Detection task. | {"language": ["en"], "metrics": ["recall"]} | zeinab-sheikhi/Roberta-pii-detection-baseline | null | [
"transformers",
"safetensors",
"roberta",
"token-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:34:14+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #roberta #token-classification #en #autotrain_compatible #endpoints_compatible #region-us
| The RoBERTa model has been fine-tuned specifically for token classification in PII Detection task. | [] | [
"TAGS\n#transformers #safetensors #roberta #token-classification #en #autotrain_compatible #endpoints_compatible #region-us \n"
] |
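A hedged usage sketch for the PII token-classification model described in the record above; the entity label set depends on the checkpoint's config and is not listed in the card, and the aggregation strategy is an assumption:

```python
# Sketch only: standard transformers token-classification inference.
from transformers import pipeline

pii_detector = pipeline(
    "token-classification",
    model="zeinab-sheikhi/Roberta-pii-detection-baseline",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

text = "Contact Jane Doe at jane.doe@example.com or +1-202-555-0175."
for entity in pii_detector(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```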
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-7b-chat-10000-25-75-L
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2200
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4400
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.13.3
| {"license": "llama2", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "llama-7b-chat-10000-25-75-L", "results": []}]} | Niyantha23M/llama-7b-chat-10000-25-75-L | null | [
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] | null | 2024-04-18T06:35:35+00:00 | [] | [] | TAGS
#trl #sft #generated_from_trainer #dataset-generator #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us
|
# llama-7b-chat-10000-25-75-L
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2200
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4400
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.13.3
| [
"# llama-7b-chat-10000-25-75-L\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2200\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4400\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.33.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.13.3"
] | [
"TAGS\n#trl #sft #generated_from_trainer #dataset-generator #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us \n",
"# llama-7b-chat-10000-25-75-L\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2200\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4400\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.33.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.13.3"
] |
object-detection | ultralytics |
# YOLOv8 model to detect wavy lines in images
## Inference
### Supported Labels
```python
["line"]
```
### How to use
```bash
pip install ultralyticsplus
```
```python
from ultralyticsplus import YOLO, render_result
# load model
model = YOLO('best.pt')
# set image
image = "https://dl.ndl.go.jp/api/iiif/1879314/R0000039/full/640,640/0/default.jpg"
# perform inference
results = model.predict(image, verbose=False)
# observe results
render = render_result(model=model, image=image, result=results[0])
render.show()
```
| {"license": "cc-by-4.0", "library_name": "ultralytics", "tags": ["ultralyticsplus", "yolov8", "ultralytics"], "library_version": "8.0.43", "pipeline_tag": "object-detection"} | nakamura196/yolov8m-wavy-line-detection | null | [
"ultralytics",
"v8",
"ultralyticsplus",
"yolov8",
"object-detection",
"license:cc-by-4.0",
"model-index",
"region:us"
] | null | 2024-04-18T06:39:56+00:00 | [] | [] | TAGS
#ultralytics #v8 #ultralyticsplus #yolov8 #object-detection #license-cc-by-4.0 #model-index #region-us
|
# YOLOv8 model to detect wavy lines in images
## Inference
### Supported Labels
### How to use
| [
"# YOLOv8 model to detect wavy lines in images",
"## Inference",
"### Supported Labels",
"### How to use"
] | [
"TAGS\n#ultralytics #v8 #ultralyticsplus #yolov8 #object-detection #license-cc-by-4.0 #model-index #region-us \n",
"# YOLOv8 model to detect wavy lines in images",
"## Inference",
"### Supported Labels",
"### How to use"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Edgar404/donut-shivi-cheques | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:41:29+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | bertopic |
# TopicModel_StoreReviews
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("shantanudave/TopicModel_StoreReviews")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 10
* Number of training documents: 14747
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| 0 | clothing - clothes - fashion - clothe - clothing store | 2672 | Fashionable Clothing Selection |
| 1 | shopping - shop - price - cheap - store | 1864 | Diverse Shopping Experiences |
| 2 | tidy - clean - branch - range - renovation | 1807 | Clean Retail Space |
| 3 | quality - offer - use - stop - good | 1793 | Quality Offer Search |
| 4 | selection - choice - large - large selection - size | 1459 | Large Size Selection |
| 5 | advice - saleswoman - service - friendly - competent | 1447 | Friendly Saleswoman Service |
| 6 | staff - friendly staff - staff staff - staff friendly - friendly | 1177 | Friendly Staff Selection |
| 7 | wow - waw - oh - yeah - | 1108 | Expressive Words Discovery |
| 8 | voucher - money - return - exchange - cash | 933 | Customer Return Experience |
| 9 | super - friendly super - super friendly - pleasure - super service | 487 | super friendly service |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.23.5
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 1.3.5
* Scikit-Learn: 1.4.1.post1
* Sentence-transformers: 2.6.1
* Transformers: 4.39.3
* Numba: 0.59.1
* Plotly: 5.21.0
* Python: 3.10.13
| {"library_name": "bertopic", "tags": ["bertopic"], "pipeline_tag": "text-classification"} | shantanudave/TopicModel_StoreReviews | null | [
"bertopic",
"text-classification",
"region:us"
] | null | 2024-04-18T06:41:56+00:00 | [] | [] | TAGS
#bertopic #text-classification #region-us
| TopicModel\_StoreReviews
========================
This is a BERTopic model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
Usage
-----
To use this model, please install BERTopic:
You can use the model as follows:
Topic overview
--------------
* Number of topics: 10
* Number of training documents: 14747
Click here for an overview of all topics.
Training hyperparameters
------------------------
* calculate\_probabilities: True
* language: None
* low\_memory: False
* min\_topic\_size: 10
* n\_gram\_range: (1, 1)
* nr\_topics: None
* seed\_topic\_list: None
* top\_n\_words: 10
* verbose: True
* zeroshot\_min\_similarity: 0.7
* zeroshot\_topic\_list: None
Framework versions
------------------
* Numpy: 1.23.5
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 1.3.5
* Scikit-Learn: 1.4.1.post1
* Sentence-transformers: 2.6.1
* Transformers: 4.39.3
* Numba: 0.59.1
* Plotly: 5.21.0
* Python: 3.10.13
| [] | [
"TAGS\n#bertopic #text-classification #region-us \n"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [psmathur/orca_mini_v3_13b](https://huggingface.co/psmathur/orca_mini_v3_13b)
* [garage-bAInd/Platypus2-13B](https://huggingface.co/garage-bAInd/Platypus2-13B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: psmathur/orca_mini_v3_13b
layer_range: [0, 24]
- sources:
- model: garage-bAInd/Platypus2-13B
layer_range: [20, 40]
merge_method: passthrough
dtype: float16
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["psmathur/orca_mini_v3_13b", "garage-bAInd/Platypus2-13B"]} | Trisert/OrcaPlus | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:psmathur/orca_mini_v3_13b",
"base_model:garage-bAInd/Platypus2-13B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T06:42:00+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-psmathur/orca_mini_v3_13b #base_model-garage-bAInd/Platypus2-13B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* psmathur/orca_mini_v3_13b
* garage-bAInd/Platypus2-13B
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the passthrough merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* psmathur/orca_mini_v3_13b\n* garage-bAInd/Platypus2-13B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-psmathur/orca_mini_v3_13b #base_model-garage-bAInd/Platypus2-13B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the passthrough merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* psmathur/orca_mini_v3_13b\n* garage-bAInd/Platypus2-13B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
video-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-elderf1
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7031
- Accuracy: 0.3481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 720
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7358 | 0.1 | 73 | 1.6923 | 0.3408 |
| 1.7163 | 1.1 | 146 | 1.6662 | 0.3373 |
| 1.7018 | 2.1 | 219 | 1.6378 | 0.3408 |
| 1.7334 | 3.1 | 292 | 1.6563 | 0.3401 |
| 1.672 | 4.1 | 365 | 1.6568 | 0.2398 |
| 1.7095 | 5.1 | 438 | 1.6313 | 0.3387 |
| 1.7119 | 6.1 | 511 | 1.6309 | 0.3408 |
| 1.6981 | 7.1 | 584 | 1.6518 | 0.3289 |
| 1.7066 | 8.1 | 657 | 1.6313 | 0.3310 |
| 1.6476 | 9.09 | 720 | 1.6338 | 0.3289 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "cc-by-nc-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "MCG-NJU/videomae-base", "model-index": [{"name": "videomae-base-finetuned-elderf1", "results": []}]} | minhah/videomae-base-finetuned-elderf1 | null | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:45:00+00:00 | [] | [] | TAGS
#transformers #safetensors #videomae #video-classification #generated_from_trainer #base_model-MCG-NJU/videomae-base #license-cc-by-nc-4.0 #endpoints_compatible #region-us
| videomae-base-finetuned-elderf1
===============================
This model is a fine-tuned version of MCG-NJU/videomae-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.7031
* Accuracy: 0.3481
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.001
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* training\_steps: 720
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.1.0+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 720",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #videomae #video-classification #generated_from_trainer #base_model-MCG-NJU/videomae-base #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 720",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb-naive
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
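A minimal inference sketch is shown below; the label mapping for this checkpoint is not documented here, so the default `LABEL_0`/`LABEL_1` output names are an assumption.
```python
from transformers import pipeline
# Sketch: run IMDB-style sentiment classification with this checkpoint.
# id2label is not documented; predictions may surface as LABEL_0 / LABEL_1.
classifier = pipeline("text-classification", model="AmritaBh/distilbert-imdb-naive")
print(classifier("A surprisingly moving film with a strong cast."))
```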
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 343 | 4.9636 | 0.5 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-imdb-naive", "results": []}]} | AmritaBh/distilbert-imdb-naive | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:45:17+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert-imdb-naive
=====================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 1000
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 1000\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #pytorch #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 1000\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
video-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ElderReact-anger-balanced-hp
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6938
- Accuracy: 0.4672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 480
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7532 | 0.05 | 25 | 0.7078 | 0.5238 |
| 0.7571 | 1.05 | 50 | 0.7034 | 0.4762 |
| 0.7357 | 2.05 | 75 | 0.7080 | 0.4429 |
| 0.6976 | 3.05 | 100 | 0.7160 | 0.5238 |
| 0.7131 | 4.05 | 125 | 0.6893 | 0.4714 |
| 0.7275 | 5.05 | 150 | 0.8350 | 0.4929 |
| 0.7334 | 6.05 | 175 | 0.7127 | 0.4738 |
| 0.7274 | 7.05 | 200 | 0.7088 | 0.5048 |
| 0.697 | 8.05 | 225 | 0.6911 | 0.5190 |
| 0.7605 | 9.05 | 250 | 0.7296 | 0.4976 |
| 0.7105 | 10.05 | 275 | 0.7100 | 0.4833 |
| 0.6745 | 11.05 | 300 | 0.7271 | 0.4548 |
| 0.7166 | 12.05 | 325 | 0.6955 | 0.5286 |
| 0.6849 | 13.05 | 350 | 0.6981 | 0.4976 |
| 0.6978 | 14.05 | 375 | 0.6976 | 0.4952 |
| 0.6928 | 15.05 | 400 | 0.6941 | 0.5405 |
| 0.7057 | 16.05 | 425 | 0.7022 | 0.5 |
| 0.6842 | 17.05 | 450 | 0.6943 | 0.4738 |
| 0.6824 | 18.05 | 475 | 0.6945 | 0.5167 |
| 0.7065 | 19.01 | 480 | 0.6948 | 0.5143 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "cc-by-nc-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "MCG-NJU/videomae-base", "model-index": [{"name": "videomae-base-finetuned-ElderReact-anger-balanced-hp", "results": []}]} | minhah/videomae-base-finetuned-ElderReact-anger-balanced-hp | null | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:46:42+00:00 | [] | [] | TAGS
#transformers #safetensors #videomae #video-classification #generated_from_trainer #base_model-MCG-NJU/videomae-base #license-cc-by-nc-4.0 #endpoints_compatible #region-us
| videomae-base-finetuned-ElderReact-anger-balanced-hp
====================================================
This model is a fine-tuned version of MCG-NJU/videomae-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6938
* Accuracy: 0.4672
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.001
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* training\_steps: 480
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.1.0+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 480",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #videomae #video-classification #generated_from_trainer #base_model-MCG-NJU/videomae-base #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 480",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zzttbrdd/sn6_05m](https://huggingface.co/zzttbrdd/sn6_05m)
* [zzttbrdd/sn6_07m](https://huggingface.co/zzttbrdd/sn6_07m)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: zzttbrdd/sn6_05m
layer_range: [0, 32]
- model: zzttbrdd/sn6_07m
layer_range: [0, 32]
merge_method: slerp
base_model: zzttbrdd/sn6_07m
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.3
dtype: bfloat16
```
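To run a merge from a configuration like the one above, mergekit's command-line entry point can be used. The output directory and the optional flag below are illustrative choices, not taken from this repository.
```bash
pip install mergekit
# Save the YAML above as slerp-config.yaml, then run the merge.
# --cuda performs the merge computations on GPU when available (illustrative flag).
mergekit-yaml slerp-config.yaml ./merged-model --cuda
```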
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["zzttbrdd/sn6_05m", "zzttbrdd/sn6_07m"]} | Sumail/Ame11 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zzttbrdd/sn6_05m",
"base_model:zzttbrdd/sn6_07m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T06:49:18+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-zzttbrdd/sn6_05m #base_model-zzttbrdd/sn6_07m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* zzttbrdd/sn6_05m
* zzttbrdd/sn6_07m
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* zzttbrdd/sn6_05m\n* zzttbrdd/sn6_07m",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-zzttbrdd/sn6_05m #base_model-zzttbrdd/sn6_07m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* zzttbrdd/sn6_05m\n* zzttbrdd/sn6_07m",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers |
# Fine-tuning Mistral-7B-v0.1 on Symbolic Instruction Tuning Dataset
This repository contains the fine-tuned version of the `mistralai/Mistral-7B-v0.1` model on the `sail/symbolic-instruction-tuning` dataset. The objective of this fine-tuning process is to specialize the pre-trained model for improved performance on tasks that require understanding and processing symbolic instructions.
## Model Description
`Mistral-7B-v0.1` is a transformer-based language model pre-trained on a diverse corpus of text. Our fine-tuning process aims to leverage this pre-trained model and further optimize it for the symbolic instruction tuning task provided by the `sail/symbolic-instruction-tuning` dataset.
## Dataset
The `sail/symbolic-instruction-tuning` dataset is designed to test a model's ability to comprehend and execute symbolic instructions. It consists of a series of tasks that require the model to manipulate symbolic inputs according to specific instructions.
## Fine-tuning Process
The fine-tuning process involves the following steps (a minimal code sketch follows the list):
1. **Environment Setup**: Ensure that your environment has all the necessary dependencies installed, including `transformers` and `datasets` from Hugging Face.
2. **Data Preparation**: Load the `sail/symbolic-instruction-tuning` dataset using the `datasets` library and prepare it for the training process, including any necessary preprocessing steps.
3. **Model Initialization**: Load the pre-trained `mistralai/Mistral-7B-v0.1` model and prepare it for fine-tuning.
4. **Training**: Fine-tune the model on the prepared dataset using an appropriate training script. This involves setting hyperparameters, training loops, and logging.
5. **Evaluation**: Evaluate the fine-tuned model's performance on a validation set to ensure that it has learned the task effectively.
6. **Saving and Sharing**: Save the fine-tuned model and upload it to the Hugging Face model hub for easy sharing and reuse.
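The sketch below illustrates steps 2-4 with the 🤗 `datasets` and `transformers` APIs. It is not the original training script: the dataset column names ("prompt"/"completion"), sequence length, and hyperparameters are assumptions chosen for illustration.
```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
# Step 2: data preparation (the "prompt"/"completion" column names are an assumption).
dataset = load_dataset("sail/symbolic-instruction-tuning", split="train")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer.pad_token = tokenizer.eos_token
def tokenize(batch):
    texts = [p + c for p, c in zip(batch["prompt"], batch["completion"])]
    return tokenizer(texts, truncation=True, max_length=1024)
tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)
# Step 3: model initialization.
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
# Step 4: training with illustrative hyperparameters (not the ones actually used).
args = TrainingArguments(
    output_dir="mistral-7b-it-symbolic",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,
    logging_steps=50,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```
Steps 5 and 6 then amount to evaluating on a held-out split and calling `trainer.push_to_hub()` (or `model.push_to_hub(...)`) once the results look reasonable.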
## Usage
The fine-tuned model can be loaded from the Hugging Face model hub using the `transformers` library as follows:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "rootsec1/mistral-7B-it-aipi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
inputs = tokenizer("Example input", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
| {"license": "apache-2.0", "tags": ["finetuned"], "pipeline_tag": "text-generation", "inference": true, "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]} | rootsec1/mistral-7B-it-aipi | null | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T06:50:01+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #mistral #text-generation #finetuned #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Fine-tuning Mistral-7B-v0.1 on Symbolic Instruction Tuning Dataset
This repository contains the fine-tuned version of the 'mistralai/Mistral-7B-v0.1' model on the 'sail/symbolic-instruction-tuning' dataset. The objective of this fine-tuning process is to specialize the pre-trained model for improved performance on tasks that require understanding and processing symbolic instructions.
## Model Description
'Mistral-7B-v0.1' is a transformer-based language model pre-trained on a diverse corpus of text. Our fine-tuning process aims to leverage this pre-trained model and further optimize it for the symbolic instruction tuning task provided by the 'sail/symbolic-instruction-tuning' dataset.
## Dataset
The 'sail/symbolic-instruction-tuning' dataset is designed to test a model's ability to comprehend and execute symbolic instructions. It consists of a series of tasks that require the model to manipulate symbolic inputs according to specific instructions.
## Fine-tuning Process
The fine-tuning process involves the following steps:
1. Environment Setup: Ensure that your environment has all the necessary dependencies installed, including 'transformers' and 'datasets' from Hugging Face.
2. Data Preparation: Load the 'sail/symbolic-instruction-tuning' dataset using the 'datasets' library and prepare it for the training process, including any necessary preprocessing steps.
3. Model Initialization: Load the pre-trained 'mistralai/Mistral-7B-v0.1' model and prepare it for fine-tuning.
4. Training: Fine-tune the model on the prepared dataset using an appropriate training script. This involves setting hyperparameters, training loops, and logging.
5. Evaluation: Evaluate the fine-tuned model's performance on a validation set to ensure that it has learned the task effectively.
6. Saving and Sharing: Save the fine-tuned model and upload it to the Hugging Face model hub for easy sharing and reuse.
## Usage
The fine-tuned model can be loaded from the Hugging Face model hub using the 'transformers' library as follows:
| [
"# Fine-tuning Mistral-7B-v0.1 on Symbolic Instruction Tuning Dataset\n\nThis repository contains the fine-tuned version of the 'mistralai/Mistral-7B-v0.1' model on the 'sail/symbolic-instruction-tuning' dataset. The objective of this fine-tuning process is to specialize the pre-trained model for improved performance on tasks that require understanding and processing symbolic instructions.",
"## Model Description\n\n'Mistral-7B-v0.1' is a transformer-based language model pre-trained on a diverse corpus of text. Our fine-tuning process aims to leverage this pre-trained model and further optimize it for the symbolic instruction tuning task provided by the 'sail/symbolic-instruction-tuning' dataset.",
"## Dataset\n\nThe 'sail/symbolic-instruction-tuning' dataset is designed to test a model's ability to comprehend and execute symbolic instructions. It consists of a series of tasks that require the model to manipulate symbolic inputs according to specific instructions.",
"## Fine-tuning Process\n\nThe fine-tuning process involves the following steps:\n\n1. Environment Setup: Ensure that your environment has all the necessary dependencies installed, including 'transformers' and 'datasets' from Hugging Face.\n\n2. Data Preparation: Load the 'sail/symbolic-instruction-tuning' dataset using the 'datasets' library and prepare it for the training process, including any necessary preprocessing steps.\n\n3. Model Initialization: Load the pre-trained 'mistralai/Mistral-7B-v0.1' model and prepare it for fine-tuning.\n\n4. Training: Fine-tune the model on the prepared dataset using an appropriate training script. This involves setting hyperparameters, training loops, and logging.\n\n5. Evaluation: Evaluate the fine-tuned model's performance on a validation set to ensure that it has learned the task effectively.\n\n6. Saving and Sharing: Save the fine-tuned model and upload it to the Hugging Face model hub for easy sharing and reuse.",
"## Usage\n\nThe fine-tuned model can be loaded from the Hugging Face model hub using the 'transformers' library as follows:"
] | [
"TAGS\n#transformers #pytorch #safetensors #mistral #text-generation #finetuned #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Fine-tuning Mistral-7B-v0.1 on Symbolic Instruction Tuning Dataset\n\nThis repository contains the fine-tuned version of the 'mistralai/Mistral-7B-v0.1' model on the 'sail/symbolic-instruction-tuning' dataset. The objective of this fine-tuning process is to specialize the pre-trained model for improved performance on tasks that require understanding and processing symbolic instructions.",
"## Model Description\n\n'Mistral-7B-v0.1' is a transformer-based language model pre-trained on a diverse corpus of text. Our fine-tuning process aims to leverage this pre-trained model and further optimize it for the symbolic instruction tuning task provided by the 'sail/symbolic-instruction-tuning' dataset.",
"## Dataset\n\nThe 'sail/symbolic-instruction-tuning' dataset is designed to test a model's ability to comprehend and execute symbolic instructions. It consists of a series of tasks that require the model to manipulate symbolic inputs according to specific instructions.",
"## Fine-tuning Process\n\nThe fine-tuning process involves the following steps:\n\n1. Environment Setup: Ensure that your environment has all the necessary dependencies installed, including 'transformers' and 'datasets' from Hugging Face.\n\n2. Data Preparation: Load the 'sail/symbolic-instruction-tuning' dataset using the 'datasets' library and prepare it for the training process, including any necessary preprocessing steps.\n\n3. Model Initialization: Load the pre-trained 'mistralai/Mistral-7B-v0.1' model and prepare it for fine-tuning.\n\n4. Training: Fine-tune the model on the prepared dataset using an appropriate training script. This involves setting hyperparameters, training loops, and logging.\n\n5. Evaluation: Evaluate the fine-tuned model's performance on a validation set to ensure that it has learned the task effectively.\n\n6. Saving and Sharing: Save the fine-tuned model and upload it to the Hugging Face model hub for easy sharing and reuse.",
"## Usage\n\nThe fine-tuned model can be loaded from the Hugging Face model hub using the 'transformers' library as follows:"
] |
null | transformers |
# Trisert/OrcaPlus-Q4_K_S-GGUF
This model was converted to GGUF format from [`Trisert/OrcaPlus`](https://huggingface.co/Trisert/OrcaPlus) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Trisert/OrcaPlus) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Trisert/OrcaPlus-Q4_K_S-GGUF --model orcaplus.Q4_K_S.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Trisert/OrcaPlus-Q4_K_S-GGUF --model orcaplus.Q4_K_S.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m orcaplus.Q4_K_S.gguf -n 128
```
| {"library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["psmathur/orca_mini_v3_13b", "garage-bAInd/Platypus2-13B"]} | Trisert/OrcaPlus-Q4_K_S-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:psmathur/orca_mini_v3_13b",
"base_model:garage-bAInd/Platypus2-13B",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:50:13+00:00 | [] | [] | TAGS
#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-psmathur/orca_mini_v3_13b #base_model-garage-bAInd/Platypus2-13B #endpoints_compatible #region-us
|
# Trisert/OrcaPlus-Q4_K_S-GGUF
This model was converted to GGUF format from 'Trisert/OrcaPlus' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# Trisert/OrcaPlus-Q4_K_S-GGUF\nThis model was converted to GGUF format from 'Trisert/OrcaPlus' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-psmathur/orca_mini_v3_13b #base_model-garage-bAInd/Platypus2-13B #endpoints_compatible #region-us \n",
"# Trisert/OrcaPlus-Q4_K_S-GGUF\nThis model was converted to GGUF format from 'Trisert/OrcaPlus' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
# Clevyby/Red-Daffodil-7B-Q5_K_S-GGUF
This model was converted to GGUF format from [`nakodanei/Red-Daffodil-7B`](https://huggingface.co/nakodanei/Red-Daffodil-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nakodanei/Red-Daffodil-7B) for more details on the model.
### Note:
The additional files in this GGUF repo are for personal use with Text Gen Webui using llamacpp_hf. | {"license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"]} | Clevyby/Red-Daffodil-7B-Q5_K_S-GGUF | null | [
"transformers",
"gguf",
"mistral",
"text-generation",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T06:51:34+00:00 | [] | [] | TAGS
#transformers #gguf #mistral #text-generation #llama-cpp #gguf-my-repo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Clevyby/Red-Daffodil-7B-Q5_K_S-GGUF
This model was converted to GGUF format from 'nakodanei/Red-Daffodil-7B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
### Note:
The additional files in this GGUF repo is for personal usage using Text Gen Webui with llamacpp_hf. | [
"# Clevyby/Red-Daffodil-7B-Q5_K_S-GGUF\nThis model was converted to GGUF format from 'nakodanei/Red-Daffodil-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"### Note: \nThe additional files in this GGUF repo is for personal usage using Text Gen Webui with llamacpp_hf."
] | [
"TAGS\n#transformers #gguf #mistral #text-generation #llama-cpp #gguf-my-repo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Clevyby/Red-Daffodil-7B-Q5_K_S-GGUF\nThis model was converted to GGUF format from 'nakodanei/Red-Daffodil-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"### Note: \nThe additional files in this GGUF repo is for personal usage using Text Gen Webui with llamacpp_hf."
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/CarrotAI/OpenCarrot-Mix-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
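As a concrete starting point, a single quant from the table below can be downloaded with `huggingface-cli` and run with llama.cpp; the chosen file and prompt are only examples, and the binary name depends on your llama.cpp build.
```bash
# Download one quant from this repo (filename taken from the table below).
huggingface-cli download mradermacher/OpenCarrot-Mix-7B-GGUF \
  OpenCarrot-Mix-7B.Q4_K_S.gguf --local-dir .
# Run it with llama.cpp (older builds ship ./main, newer ones ./llama-cli).
./main -m OpenCarrot-Mix-7B.Q4_K_S.gguf -p "Hello, my name is" -n 128
```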
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-Mix-7B-GGUF/resolve/main/OpenCarrot-Mix-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-Mix-7B-GGUF/resolve/main/OpenCarrot-Mix-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-Mix-7B-GGUF/resolve/main/OpenCarrot-Mix-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-Mix-7B-GGUF/resolve/main/OpenCarrot-Mix-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-Mix-7B-GGUF/resolve/main/OpenCarrot-Mix-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-Mix-7B-GGUF/resolve/main/OpenCarrot-Mix-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-Mix-7B-GGUF/resolve/main/OpenCarrot-Mix-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-Mix-7B-GGUF/resolve/main/OpenCarrot-Mix-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-Mix-7B-GGUF/resolve/main/OpenCarrot-Mix-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-Mix-7B-GGUF/resolve/main/OpenCarrot-Mix-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-Mix-7B-GGUF/resolve/main/OpenCarrot-Mix-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-Mix-7B-GGUF/resolve/main/OpenCarrot-Mix-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-Mix-7B-GGUF/resolve/main/OpenCarrot-Mix-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-Mix-7B-GGUF/resolve/main/OpenCarrot-Mix-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "CarrotAI/OpenCarrot-Mix-7B", "quantized_by": "mradermacher"} | mradermacher/OpenCarrot-Mix-7B-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:CarrotAI/OpenCarrot-Mix-7B",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:51:56+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #mergekit #merge #en #base_model-CarrotAI/OpenCarrot-Mix-7B #license-mit #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #mergekit #merge #en #base_model-CarrotAI/OpenCarrot-Mix-7B #license-mit #endpoints_compatible #region-us \n"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cointegrated/SONAR_200_converted_text_encoder | null | [
"transformers",
"safetensors",
"m2m_100",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:52:58+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #m2m_100 #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #m2m_100 #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | zandfj/LLaMA2-7B-Chat-lora-nq-tet-robust-041814 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:52:58+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | abhayesian/BobzillaV27 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:54:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** nzwildcode
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | nzwildcode/FlasherAI-v0.1-7B | null | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:54:51+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: nzwildcode
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: nzwildcode\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: nzwildcode\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cointegrated/SONAR_200_converted_text_decoder | null | [
"transformers",
"safetensors",
"m2m_100",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:54:58+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #m2m_100 #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #m2m_100 #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Clevyby/SnowyRP-V2-13B-L2_BetaTest-Q4_K_M-GGUF
This model was converted to GGUF format from [`Masterjp123/SnowyRP-V2-13B-L2_BetaTest`](https://huggingface.co/Masterjp123/SnowyRP-V2-13B-L2_BetaTest) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Masterjp123/SnowyRP-V2-13B-L2_BetaTest) for more details on the model.
### Note:
The additional files in this GGUF repo are for personal usage using Text Gen Webui with llamacpp_hf. | {"library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["TheBloke/Llama-2-13B-fp16", "Masterjp123/SnowyRP-FinalV1-L2-13B", "Masterjp123/Snowyrp-V2B-P1", "sauce1337/BerrySauce-L2-13b"]} | Clevyby/SnowyRP-V2-13B-L2_BetaTest-Q4_K_M-GGUF | null | [
"transformers",
"gguf",
"llama",
"text-generation",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:TheBloke/Llama-2-13B-fp16",
"base_model:Masterjp123/SnowyRP-FinalV1-L2-13B",
"base_model:Masterjp123/Snowyrp-V2B-P1",
"base_model:sauce1337/BerrySauce-L2-13b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T06:56:09+00:00 | [] | [] | TAGS
#transformers #gguf #llama #text-generation #mergekit #merge #llama-cpp #gguf-my-repo #base_model-TheBloke/Llama-2-13B-fp16 #base_model-Masterjp123/SnowyRP-FinalV1-L2-13B #base_model-Masterjp123/Snowyrp-V2B-P1 #base_model-sauce1337/BerrySauce-L2-13b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Clevyby/SnowyRP-V2-13B-L2_BetaTest-Q4_K_M-GGUF
This model was converted to GGUF format from 'Masterjp123/SnowyRP-V2-13B-L2_BetaTest' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
### Note:
The additional files in this GGUF repo are for personal usage using Text Gen Webui with llamacpp_hf. | [
"# Clevyby/SnowyRP-V2-13B-L2_BetaTest-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'Masterjp123/SnowyRP-V2-13B-L2_BetaTest' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"### Note: \nThe additional files in this GGUF repo is for personal usage using Text Gen Webui with llamacpp_hf."
] | [
"TAGS\n#transformers #gguf #llama #text-generation #mergekit #merge #llama-cpp #gguf-my-repo #base_model-TheBloke/Llama-2-13B-fp16 #base_model-Masterjp123/SnowyRP-FinalV1-L2-13B #base_model-Masterjp123/Snowyrp-V2B-P1 #base_model-sauce1337/BerrySauce-L2-13b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Clevyby/SnowyRP-V2-13B-L2_BetaTest-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'Masterjp123/SnowyRP-V2-13B-L2_BetaTest' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"### Note: \nThe additional files in this GGUF repo is for personal usage using Text Gen Webui with llamacpp_hf."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | lxsure/Sniper_28 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:56:33+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # MathGenie: Generating Synthetic Data with Question Back-translation for Enhancing Mathematical Reasoning of LLMs
This is a model for the paper "[MathGenie: Generating Synthetic Data with Question Back-translation for Enhancing Mathematical Reasoning of LLMs](https://arxiv.org/pdf/2402.16352.pdf)".
## News
- **[2024-02-26]** Our paper is now accessible at [ArXiv Paper](https://arxiv.org/pdf/2402.16352.pdf).
## Introduction
Large language models (LLMs) have exhibited great potential in mathematical reasoning. However, there remains a performance gap in this area between existing open-source models and closed-source models such as GPT-4.
In this paper, we introduce **MathGenie**, a novel method for generating diverse and reliable math problems from a small-scale problem-solution dataset (denoted as *seed data*). We augment the ground-truth solutions of our seed data and train a back-translation model to translate the augmented solutions back into new questions. Subsequently, we generate code-integrated solutions for the new questions. To ensure the correctness of the code-integrated solutions, we employ a rationale-based strategy for solution verification.
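The loop described above can be sketched in a few lines. The outline below is purely illustrative: the four helper functions are dummy stand-ins for model calls, not the released MathGenie implementation.

```python
# Purely illustrative outline of the pipeline described above. The four
# helpers are dummy stand-ins for model calls, so this runs end-to-end but
# produces placeholder text rather than real synthetic math data.

def augment_solution(solution):              # stand-in: solution augmentation
    return [solution + " (augmented)"]

def back_translate(augmented_solution):      # stand-in: question back-translation model
    return "New question derived from: " + augmented_solution

def solve_with_code(question):               # stand-in: code-integrated solution generation
    return "# code-integrated solution for: " + question

def verify_with_rationale(question, code_solution):  # stand-in: rationale-based verification
    return True

def generate_synthetic_data(seed_data):
    pairs = []
    for question, solution in seed_data:
        for augmented in augment_solution(solution):            # 1. augment seed solutions
            new_question = back_translate(augmented)             # 2. back-translate into a new question
            code_solution = solve_with_code(new_question)        # 3. solve the new question with code
            if verify_with_rationale(new_question, code_solution):  # 4. keep only verified pairs
                pairs.append((new_question, code_solution))
    return pairs

print(generate_synthetic_data([("What is 2 + 3?", "2 + 3 = 5")]))
```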
Various pretrained models, ranging from 7B to 70B, are trained on the newly curated data to test the effectiveness of the proposed augmentation technique, resulting in a family of models known as *MathGenieLM*. These models consistently outperform previous open-source models across five representative mathematical reasoning datasets, achieving state-of-the-art performance. In particular, MathGenieLM-InternLM2 achieves an accuracy of 87.7% on GSM8K and 55.7% on MATH, securing the best overall score among open-source language models.
You can refer to the [project homepage](https://mathgenie.github.io/) and [the paper](https://arxiv.org/pdf/2402.16352.pdf) for more details.
## Usage
### Models
Our [MathGenie-InterLM-20B](https://huggingface.co/MathGenie/MathGenie-InterLM-20B) model is available at Huggingface now.
Our [MathGenie-Mixtral-8x7B](https://huggingface.co/MathGenie/MathGenie-Mixtral-8x7B) model is available at Huggingface now.
| Base Model | Model |
| ------------ | ------------------------------------------------------------ |
| InternLM-20B | [MathGenie-InterLM-20B](https://huggingface.co/MathGenie/MathGenie-InterLM-20B) |
| Mixtral-8x7B | [MathGenie-Mixtral-8x7B](https://huggingface.co/MathGenie/MathGenie-Mixtral-8x7B) |
### Inference & Evaluation
Please refer to the [MathCoder repo](https://github.com/mathllm/MathCoder) for the detailed code for inference and evaluation of our MathGenieLM models.
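For a quick local smoke test without the full evaluation harness, the minimal sketch below should work, assuming standard `transformers` loading for this Mixtral-based checkpoint; the prompt format and generation settings are assumptions, and the MathCoder repo remains the reference for the exact template and the code-execution loop.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MathGenie/MathGenie-Mixtral-8x7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Assumed, simplified prompt; the official template lives in the MathCoder repo.
prompt = (
    "Question: Natalia sold clips to 48 friends in April, and half as many in May. "
    "How many clips did she sell altogether?\nSolution:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the 8x7B mixture is large; `device_map="auto"` shards it across available GPUs, and quantized loading may be needed on smaller machines.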
## Citation
If you find this paper helpful to your research, please kindly cite this BibTeX:
```
@misc{lu2024mathgenie,
title={MathGenie: Generating Synthetic Data with Question Back-translation for Enhancing Mathematical Reasoning of LLMs},
author={Zimu Lu and Aojun Zhou and Houxing Ren and Ke Wang and Weikang Shi and Junting Pan and Mingjie Zhan and Hongsheng Li},
year={2024},
eprint={2402.16352},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@inproceedings{
wang2024mathcoder,
title={MathCoder: Seamless Code Integration in {LLM}s for Enhanced Mathematical Reasoning},
author={Ke Wang and Houxing Ren and Aojun Zhou and Zimu Lu and Sichun Luo and Weikang Shi and Renrui Zhang and Linqi Song and Mingjie Zhan and Hongsheng Li},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=z8TW0ttBPp}
}
``` | {"language": ["en"], "license": "apache-2.0", "tags": ["code", "math"], "metrics": ["accuracy"], "pipeline_tag": "text-generation"} | MathGenie/MathGenie-Mixtral-8x7B | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"code",
"math",
"en",
"arxiv:2402.16352",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T06:57:17+00:00 | [
"2402.16352"
] | [
"en"
] | TAGS
#transformers #safetensors #mixtral #text-generation #code #math #en #arxiv-2402.16352 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| MathGenie: Generating Synthetic Data with Question Back-translation for Enhancing Mathematical Reasoning of LLMs
================================================================================================================
This is a model for the paper "MathGenie: Generating Synthetic Data with Question Back-translation for Enhancing Mathematical Reasoning of LLMs".
News
----
* [2024-02-26] Our paper is now accessible at ArXiv Paper.
Introduction
------------
Large language models (LLMs) have exhibited great potential in mathematical reasoning. However, there remains a performance gap in this area between existing open-source models and closed-source models such as GPT-4.
In this paper, we introduce MathGenie, a novel method for generating diverse and reliable math problems from a small-scale problem-solution dataset (denoted as *seed data*). We augment the ground-truth solutions of our seed data and train a back-translation model to translate the augmented solutions back into new questions. Subsequently, we generate code-integrated solutions for the new questions. To ensure the correctness of the code-integrated solutions, we employ a rationale-based strategy for solution verification.
Various pretrained models, ranging from 7B to 70B, are trained on the newly curated data to test the effectiveness of the proposed augmentation technique, resulting in a family of models known as *MathGenieLM*. These models consistently outperform previous open-source models across five representative mathematical reasoning datasets, achieving state-of-the-art performance. In particular, MathGenieLM-InternLM2 achieves an accuracy of 87.7% on GSM8K and 55.7% on MATH, securing the best overall score among open-source language models.
You can refer to the project homepage and the paper for more details.
Usage
-----
### Models
Our MathGenie-InterLM-20B model is available at Huggingface now.
Our MathGenie-Mixtral-8x7B model is available at Huggingface now.
### Inference & Evaluation
Please refer to the MathCoder repo for the detailed code for inference and evaluation of our MathGenieLM models.
If you find this paper helpful to your research, please kindly cite this BibTex:
| [
"### Models\n\n\nOur MathGenie-InterLM-20B model is available at Huggingface now.\nOur MathGenie-Mixtral-8x7B model is available at Huggingface now.",
"### Inference & Evaluation\n\n\nPlease refer to the MathCoder repo for the detailed code for inference and evaluation of our MathGenieLM models.\n\n\nIf you find this paper helpful to your research, please kindly cite this BibTex:"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #code #math #en #arxiv-2402.16352 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Models\n\n\nOur MathGenie-InterLM-20B model is available at Huggingface now.\nOur MathGenie-Mixtral-8x7B model is available at Huggingface now.",
"### Inference & Evaluation\n\n\nPlease refer to the MathCoder repo for the detailed code for inference and evaluation of our MathGenieLM models.\n\n\nIf you find this paper helpful to your research, please kindly cite this BibTex:"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/garage-bAInd/Camel-Platypus2-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Camel-Platypus2-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
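As a concrete illustration of the multi-part case, the short sketch below joins the two Q6_K parts from the table into a single usable file with plain Python; the local filenames are assumptions and must match the files you actually downloaded.

```python
import shutil

# Assumed local filenames, matching the Q6_K entries in the table below.
parts = [
    "Camel-Platypus2-70B.Q6_K.gguf.part1of2",
    "Camel-Platypus2-70B.Q6_K.gguf.part2of2",
]

# Concatenate the parts in order into one usable GGUF file.
with open("Camel-Platypus2-70B.Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, merged)
```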
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Camel-Platypus2-70B-GGUF/resolve/main/Camel-Platypus2-70B.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "datasets": ["garage-bAInd/Open-Platypus"], "base_model": "garage-bAInd/Camel-Platypus2-70B", "quantized_by": "mradermacher"} | mradermacher/Camel-Platypus2-70B-GGUF | null | [
"transformers",
"gguf",
"en",
"dataset:garage-bAInd/Open-Platypus",
"base_model:garage-bAInd/Camel-Platypus2-70B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T06:59:36+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #dataset-garage-bAInd/Open-Platypus #base_model-garage-bAInd/Camel-Platypus2-70B #license-cc-by-nc-4.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #dataset-garage-bAInd/Open-Platypus #base_model-garage-bAInd/Camel-Platypus2-70B #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ResplendentAI/Aura_v3_7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
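As one concrete option for the single-file quants below, the sketch assumes the `huggingface_hub` and `llama-cpp-python` packages; the choice of the Q4_K_M file, the context size, and the sampling call are illustrative assumptions rather than recommendations.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one of the single-file quants listed below (Q4_K_M here).
gguf_path = hf_hub_download(
    repo_id="mradermacher/Aura_v3_7B-GGUF",
    filename="Aura_v3_7B.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a two-sentence story about an aurora.", max_tokens=128)
print(out["choices"][0]["text"])
```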
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aura_v3_7B-GGUF/resolve/main/Aura_v3_7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_v3_7B-GGUF/resolve/main/Aura_v3_7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_v3_7B-GGUF/resolve/main/Aura_v3_7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_v3_7B-GGUF/resolve/main/Aura_v3_7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Aura_v3_7B-GGUF/resolve/main/Aura_v3_7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_v3_7B-GGUF/resolve/main/Aura_v3_7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aura_v3_7B-GGUF/resolve/main/Aura_v3_7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_v3_7B-GGUF/resolve/main/Aura_v3_7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_v3_7B-GGUF/resolve/main/Aura_v3_7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aura_v3_7B-GGUF/resolve/main/Aura_v3_7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aura_v3_7B-GGUF/resolve/main/Aura_v3_7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_v3_7B-GGUF/resolve/main/Aura_v3_7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_v3_7B-GGUF/resolve/main/Aura_v3_7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Aura_v3_7B-GGUF/resolve/main/Aura_v3_7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "ResplendentAI/Aura_v3_7B", "quantized_by": "mradermacher"} | mradermacher/Aura_v3_7B-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:ResplendentAI/Aura_v3_7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:01:29+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-ResplendentAI/Aura_v3_7B #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-ResplendentAI/Aura_v3_7B #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_ablation_6iters_iter_3
This model is a fine-tuned version of [ShenaoZ/0.001_ablation_6iters_iter_2](https://huggingface.co/ShenaoZ/0.001_ablation_6iters_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
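The settings above map fairly directly onto `transformers` `TrainingArguments`; the sketch below is one hedged way to write them down, assuming the TRL-style DPO run suggested by the tags; the 8-GPU layout and bf16 precision are assumptions not stated in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="0.001_ablation_6iters_iter_3",
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # x 8 GPUs x 2 accumulation steps = 128 total
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    bf16=True,                        # assumption; precision is not listed above
)
```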
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.001_ablation_6iters_iter_2", "model-index": [{"name": "0.001_ablation_6iters_iter_3", "results": []}]} | ShenaoZ/0.001_ablation_6iters_iter_3 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.001_ablation_6iters_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T07:02:11+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.001_ablation_6iters_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.001_ablation_6iters_iter_3
This model is a fine-tuned version of ShenaoZ/0.001_ablation_6iters_iter_2 on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# 0.001_ablation_6iters_iter_3\n\nThis model is a fine-tuned version of ShenaoZ/0.001_ablation_6iters_iter_2 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.001_ablation_6iters_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.001_ablation_6iters_iter_3\n\nThis model is a fine-tuned version of ShenaoZ/0.001_ablation_6iters_iter_2 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0_ablation_6iters_iter_3
This model is a fine-tuned version of [ShenaoZ/0.0_ablation_6iters_iter_2](https://huggingface.co/ShenaoZ/0.0_ablation_6iters_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.0_ablation_6iters_iter_2", "model-index": [{"name": "0.0_ablation_6iters_iter_3", "results": []}]} | ShenaoZ/0.0_ablation_6iters_iter_3 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0_ablation_6iters_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T07:05:01+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0_ablation_6iters_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.0_ablation_6iters_iter_3
This model is a fine-tuned version of ShenaoZ/0.0_ablation_6iters_iter_2 on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# 0.0_ablation_6iters_iter_3\n\nThis model is a fine-tuned version of ShenaoZ/0.0_ablation_6iters_iter_2 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0_ablation_6iters_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.0_ablation_6iters_iter_3\n\nThis model is a fine-tuned version of ShenaoZ/0.0_ablation_6iters_iter_2 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
null | transformers |
# nzwildcode/FlasherAI-v0.1-7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`nzwildcode/FlasherAI-v0.1-7B`](https://huggingface.co/nzwildcode/FlasherAI-v0.1-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nzwildcode/FlasherAI-v0.1-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo nzwildcode/FlasherAI-v0.1-7B-Q4_K_M-GGUF --model flasherai-v0.1-7b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo nzwildcode/FlasherAI-v0.1-7B-Q4_K_M-GGUF --model flasherai-v0.1-7b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m flasherai-v0.1-7b.Q4_K_M.gguf -n 128
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft", "llama-cpp", "gguf-my-repo"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | nzwildcode/FlasherAI-v0.1-7B-Q4_K_M-GGUF | null | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:06:03+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #text-generation-inference #unsloth #mistral #trl #sft #llama-cpp #gguf-my-repo #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# nzwildcode/FlasherAI-v0.1-7B-Q4_K_M-GGUF
This model was converted to GGUF format from 'nzwildcode/FlasherAI-v0.1-7B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# nzwildcode/FlasherAI-v0.1-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'nzwildcode/FlasherAI-v0.1-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #text-generation-inference #unsloth #mistral #trl #sft #llama-cpp #gguf-my-repo #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# nzwildcode/FlasherAI-v0.1-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'nzwildcode/FlasherAI-v0.1-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
unconditional-image-generation | diffusers |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('xfddlm/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
| {"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]} | xfddlm/sd-class-butterflies-32 | null | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2024-04-18T07:06:15+00:00 | [] | [] | TAGS
#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us
|
# Model Card for Unit 1 of the Diffusion Models Class
This model is a diffusion model for unconditional image generation of cute .
## Usage
| [
"# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .",
"## Usage"
] | [
"TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n",
"# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .",
"## Usage"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
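Since this section is still marked as needing information, the following is a minimal, hypothetical sketch of loading the checkpoint with the `text-classification` pipeline; the example sentence is arbitrary, and the label names and their meanings are assumptions, as the card does not document the label set.

```python
from transformers import pipeline

# Hypothetical usage sketch; the label semantics are not documented in this card.
classifier = pipeline("text-classification", model="SOUMYADEEPSAR/convbert_transfer")

print(classifier("An example sentence to classify."))
# e.g. [{'label': 'LABEL_0', 'score': 0.97}] -- labels depend on the fine-tuning task
```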
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | SOUMYADEEPSAR/convbert_transfer | null | [
"transformers",
"safetensors",
"convbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:09:02+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #convbert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #convbert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Clevyby/Foredoomed-9B-Q5_K_S-GGUF
This model was converted to GGUF format from [`CalderaAI/Foredoomed-9B`](https://huggingface.co/CalderaAI/Foredoomed-9B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CalderaAI/Foredoomed-9B) for more details on the model.
### Note:
The additional files in this GGUF repo is for personal usage using Text Gen Webui with llamacpp_hf. | {"language": ["en"], "license": "apache-2.0", "tags": ["mistral", "uncensored", "merge", "slerp", "foredoomed", "passthrough_merge", "9B", "starling", "hermes", "dolphin", "openchat", "erebus", "cockatrice", "holodeck", "limarp", "koboldai", "mergekit", "llama-cpp", "gguf-my-repo"]} | Clevyby/Foredoomed-9B-Q5_K_S-GGUF | null | [
"transformers",
"gguf",
"mistral",
"text-generation",
"uncensored",
"merge",
"slerp",
"foredoomed",
"passthrough_merge",
"9B",
"starling",
"hermes",
"dolphin",
"openchat",
"erebus",
"cockatrice",
"holodeck",
"limarp",
"koboldai",
"mergekit",
"llama-cpp",
"gguf-my-repo",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T07:12:39+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #mistral #text-generation #uncensored #merge #slerp #foredoomed #passthrough_merge #9B #starling #hermes #dolphin #openchat #erebus #cockatrice #holodeck #limarp #koboldai #mergekit #llama-cpp #gguf-my-repo #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Clevyby/Foredoomed-9B-Q5_K_S-GGUF
This model was converted to GGUF format from 'CalderaAI/Foredoomed-9B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
### Note:
The additional files in this GGUF repo are for personal usage using Text Gen Webui with llamacpp_hf. | [
"# Clevyby/Foredoomed-9B-Q5_K_S-GGUF\nThis model was converted to GGUF format from 'CalderaAI/Foredoomed-9B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"### Note: \nThe additional files in this GGUF repo is for personal usage using Text Gen Webui with llamacpp_hf."
] | [
"TAGS\n#transformers #gguf #mistral #text-generation #uncensored #merge #slerp #foredoomed #passthrough_merge #9B #starling #hermes #dolphin #openchat #erebus #cockatrice #holodeck #limarp #koboldai #mergekit #llama-cpp #gguf-my-repo #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Clevyby/Foredoomed-9B-Q5_K_S-GGUF\nThis model was converted to GGUF format from 'CalderaAI/Foredoomed-9B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"### Note: \nThe additional files in this GGUF repo is for personal usage using Text Gen Webui with llamacpp_hf."
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125m_LAMA_TREx_finetuning
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
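Pending more details from the author, here is a minimal, hypothetical usage sketch with the `text-generation` pipeline; the cloze-style prompt reflects the LAMA T-REx probing format, but the exact prompt template used during fine-tuning is not documented in this card.

```python
from transformers import pipeline

# Hypothetical usage sketch; the fine-tuning prompt format is not documented in this card.
generator = pipeline("text-generation", model="KimByeongSu/gpt-neo-125m_LAMA_TREx_finetuning")

# LAMA T-REx probes factual completions such as "Paris is the capital of ___".
print(generator("Paris is the capital of", max_new_tokens=5, do_sample=False)[0]["generated_text"])
```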
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 1.13.1
- Datasets 2.14.5
- Tokenizers 0.13.3
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/gpt-neo-125m", "model-index": [{"name": "gpt-neo-125m_LAMA_TREx_finetuning", "results": []}]} | KimByeongSu/gpt-neo-125m_LAMA_TREx_finetuning | null | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:12:56+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt_neo #text-generation #generated_from_trainer #base_model-EleutherAI/gpt-neo-125m #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# gpt-neo-125m_LAMA_TREx_finetuning
This model is a fine-tuned version of EleutherAI/gpt-neo-125m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 1.13.1
- Datasets 2.14.5
- Tokenizers 0.13.3
| [
"# gpt-neo-125m_LAMA_TREx_finetuning\n\nThis model is a fine-tuned version of EleutherAI/gpt-neo-125m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 128\n- eval_batch_size: 128\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.33.2\n- Pytorch 1.13.1\n- Datasets 2.14.5\n- Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #generated_from_trainer #base_model-EleutherAI/gpt-neo-125m #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# gpt-neo-125m_LAMA_TREx_finetuning\n\nThis model is a fine-tuned version of EleutherAI/gpt-neo-125m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 128\n- eval_batch_size: 128\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.33.2\n- Pytorch 1.13.1\n- Datasets 2.14.5\n- Tokenizers 0.13.3"
] |
object-detection | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# solar_detection
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
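Until the author fills in this section, here is a minimal, hypothetical inference sketch with the `object-detection` pipeline; the image path and score threshold are arbitrary, and the detected label names depend on the (undocumented) fine-tuning dataset.

```python
from PIL import Image
from transformers import pipeline

# Hypothetical usage sketch; label names depend on the undocumented fine-tuning data.
detector = pipeline("object-detection", model="michalszy888/solar_detection")

image = Image.open("solar_farm.jpg")  # placeholder image path
for det in detector(image, threshold=0.5):
    print(det["label"], round(det["score"], 3), det["box"])
```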
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "facebook/detr-resnet-50", "model-index": [{"name": "solar_detection", "results": []}]} | michalszy888/solar_detection | null | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:13:31+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us
|
# solar_detection
This model is a fine-tuned version of facebook/detr-resnet-50 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# solar_detection\n\nThis model is a fine-tuned version of facebook/detr-resnet-50 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 10\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# solar_detection\n\nThis model is a fine-tuned version of facebook/detr-resnet-50 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 10\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
### bitnet_b1_58-3B-Coder
Code finetuned version of [bitnet_b1_58-3B](https://huggingface.co/1bitLLM/bitnet_b1_58-3B)
### Usage
```python
from tokenization_bitnet import BitnetTokenizer
from transformers import AutoModelForCausalLM
import torch
PROMPT = """### Instruction
{instruction}
### Response
"""
instruction = "Write a quick sort algorithm in Python."  # replace with your own code instruction
prompt = PROMPT.format(instruction=instruction)
tokenizer = BitnetTokenizer.from_pretrained(
"TechxGenus/bitnet_b1_58-3B-Coder",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/bitnet_b1_58-3B-Coder",
torch_dtype=torch.float16,
device_map="auto",
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=2048)
print(tokenizer.decode(outputs[0]))
```
### Note
The model may sometimes make errors, produce misleading content, or struggle with tasks that are not related to coding. It has undergone very limited testing. Additional safety testing should be performed before any real-world deployments.
| {"license": "mit"} | TechxGenus/bitnet_b1_58-3B-Coder | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T07:13:44+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
### bitnet_b1_58-3B-Coder
Code finetuned version of bitnet_b1_58-3B
### Usage
### Note
Model may sometimes make errors, produce misleading contents, or struggle to manage tasks that are not related to coding. It has undergone very limited testing. Additional safety testing should be performed before any real-world deployments.
| [
"### bitnet_b1_58-3B-Coder\n\nCode finetuned version of bitnet_b1_58-3B",
"### Usage",
"### Note\n\nModel may sometimes make errors, produce misleading contents, or struggle to manage tasks that are not related to coding. It has undergone very limited testing. Additional safety testing should be performed before any real-world deployments."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### bitnet_b1_58-3B-Coder\n\nCode finetuned version of bitnet_b1_58-3B",
"### Usage",
"### Note\n\nModel may sometimes make errors, produce misleading contents, or struggle to manage tasks that are not related to coding. It has undergone very limited testing. Additional safety testing should be performed before any real-world deployments."
] |
text-generation | transformers |
# Clevyby/Nomachi-7b-v1-Q5_K_S-GGUF
This model was converted to GGUF format from [`nakodanei/Nomachi-7b-v1`](https://huggingface.co/nakodanei/Nomachi-7b-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nakodanei/Nomachi-7b-v1) for more details on the model.
### Note:
The additional files in this GGUF repo is for personal usage using Text Gen Webui with llamacpp_hf. | {"license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"]} | Clevyby/Nomachi-7b-v1-Q5_K_S-GGUF | null | [
"transformers",
"gguf",
"mistral",
"text-generation",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T07:18:25+00:00 | [] | [] | TAGS
#transformers #gguf #mistral #text-generation #llama-cpp #gguf-my-repo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Clevyby/Nomachi-7b-v1-Q5_K_S-GGUF
This model was converted to GGUF format from 'nakodanei/Nomachi-7b-v1' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
### Note:
The additional files in this GGUF repo are for personal usage using Text Gen Webui with llamacpp_hf. | [
"# Clevyby/Nomachi-7b-v1-Q5_K_S-GGUF\nThis model was converted to GGUF format from 'nakodanei/Nomachi-7b-v1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"### Note: \nThe additional files in this GGUF repo is for personal usage using Text Gen Webui with llamacpp_hf."
] | [
"TAGS\n#transformers #gguf #mistral #text-generation #llama-cpp #gguf-my-repo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Clevyby/Nomachi-7b-v1-Q5_K_S-GGUF\nThis model was converted to GGUF format from 'nakodanei/Nomachi-7b-v1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"### Note: \nThe additional files in this GGUF repo is for personal usage using Text Gen Webui with llamacpp_hf."
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4833
## Model description
More information needed
## Intended uses & limitations
More information needed
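In the absence of documented usage, here is a minimal, hypothetical inference sketch; the dialogue below is invented, and the SAMSum-style chat-summarization use case is inferred from the model name rather than stated in the card.

```python
from transformers import pipeline

# Hypothetical usage sketch for dialogue summarization.
summarizer = pipeline("summarization", model="SeohyeonYoo/pegasus-samsum")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```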
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6599 | 0.54 | 500 | 1.4833 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "google/pegasus-cnn_dailymail", "model-index": [{"name": "pegasus-samsum", "results": []}]} | SeohyeonYoo/pegasus-samsum | null | [
"transformers",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:19:42+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-cnn_dailymail #autotrain_compatible #endpoints_compatible #region-us
| pegasus-samsum
==============
This model is a fine-tuned version of google/pegasus-cnn\_dailymail on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4833
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-cnn_dailymail #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/ibivibiv/orthorus_v3_125b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/orthorus_v3_125b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
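If you only need the concatenation step, a minimal sketch (assuming a two-part download such as the i1-Q4_K_S files below) is:

```bash
# Join the parts into a single GGUF file, then delete the parts.
cat orthorus_v3_125b.i1-Q4_K_S.gguf.part1of2 \
    orthorus_v3_125b.i1-Q4_K_S.gguf.part2of2 \
    > orthorus_v3_125b.i1-Q4_K_S.gguf
rm orthorus_v3_125b.i1-Q4_K_S.gguf.part*
```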
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-IQ1_S.gguf) | i1-IQ1_S | 26.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-IQ1_M.gguf) | i1-IQ1_M | 28.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 33.1 | |
| [GGUF](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 36.8 | |
| [GGUF](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-IQ2_S.gguf) | i1-IQ2_S | 38.1 | |
| [GGUF](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-IQ2_M.gguf) | i1-IQ2_M | 41.6 | |
| [GGUF](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q2_K.gguf) | i1-Q2_K | 46.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 48.3 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 51.3 | |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 54.2 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 54.2 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 55.6 | |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 60.2 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 65.3 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 66.9 | |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 71.0 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 71.4 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 75.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 86.3 | |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 88.9 | |
| [PART 1](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/orthorus_v3_125b-i1-GGUF/resolve/main/orthorus_v3_125b.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 102.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "ibivibiv/orthorus_v3_125b", "quantized_by": "mradermacher"} | mradermacher/orthorus_v3_125b-i1-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:ibivibiv/orthorus_v3_125b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:20:04+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-ibivibiv/orthorus_v3_125b #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-ibivibiv/orthorus_v3_125b #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | chiangcw/zephyr-7b-beta-Agent-Instruct_e10 | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:21:38+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # StarAntler-RP-WestLake-chatvector
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
This model merges two models whose NSFW capability was strengthened with the ChatVector method.
The first model applies the ChatVector method to [Aratako/Antler-7B-RP](https://huggingface.co/Aratako/Antler-7B-RP), created by Aratako, using [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) to strengthen its NSFW capability.
The second model likewise applies the method to Aratako's [Japanese-Starling-ChatV-7B-RP](https://huggingface.co/Aratako/Japanese-Starling-ChatV-7B-RP), again using [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2).
These two models were then merged with equal weights. Merge details are given below.
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [soramikaduki/Antler-RP-ja-westlake-chatvector](https://huggingface.co/soramikaduki/Antler-RP-ja-westlake-chatvector) as a base.
### Models Merged
The following models were included in the merge:
* [soramikaduki/Starling-RP-ja-westlake-chatvector](https://huggingface.co/soramikaduki/Starling-RP-ja-westlake-chatvector)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: soramikaduki/Antler-RP-ja-westlake-chatvector
parameters:
density: 0.5
weight:
- filter: mlp
value: 0.5
- value: 0.5
# No parameters necessary for base model
- model: soramikaduki/Starling-RP-ja-westlake-chatvector
parameters:
density: 0.5
weight:
- filter: mlp
value: 0.5
- value: 0.5
merge_method: dare_ties
base_model: soramikaduki/Antler-RP-ja-westlake-chatvector
parameters:
int8_mask: true
dtype: bfloat16
tokenizer_source: union
custom_methods:
model.embed_tokens:
method: tokenizer_permutation
parameters:
weight:
soramikaduki/Antler-RP-ja-westlake-chatvector: 0.5
soramikaduki/Starling-RP-ja-westlake-chatvector: 0.5
lm_head:
method: tokenizer_permutation
parameters:
weight:
soramikaduki/Antler-RP-ja-westlake-chatvector: 0.5
soramikaduki/Starling-RP-ja-westlake-chatvector: 0.5
```
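To reproduce the merge from this configuration, mergekit's Python entry point can be used roughly as sketched below. This is only a sketch: it assumes mergekit is installed (`pip install mergekit`), that the YAML above has been saved as `merge-config.yaml`, and it uses class and function names from mergekit's documented example, which may differ between versions.

```python
# Sketch only: reproduce the merge from the YAML configuration shown above.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge-config.yaml", "r", encoding="utf-8") as fp:  # the configuration shown above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./StarAntler-RP-WestLake-chatvector",  # output directory (arbitrary name)
    options=MergeOptions(cuda=torch.cuda.is_available(), copy_tokenizer=True),
)
```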
### Performance
<table>
<tr>
<th>Model</th>
<th>StarAntler-RP-WestLake-chatvector (This model)</th>
</tr>
<tr>
<td>Parameters</td>
<td>7B(Mistral)</td>
</tr>
<tr>
<td>ELYZAtasks100<br>average score</td>
<td>3.16</td>
</tr>
</table>
Scores on "<a href="https://huggingface.co/datasets/elyza/ELYZA-tasks-100">ELYZA-tasks-100</a>"
These scores were obtained on ELYZA-tasks-100, a benchmark for instruction-tuned Japanese models, with gpt-4-0125-preview as the evaluator. | {"language": ["ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge", "not-for-all-audiences"], "base_model": ["soramikaduki/Antler-RP-ja-westlake-chatvector", "soramikaduki/Starling-RP-ja-westlake-chatvector"]} | soramikaduki/StarAntler-RP-WestLake-chatvector | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"not-for-all-audiences",
"ja",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:soramikaduki/Antler-RP-ja-westlake-chatvector",
"base_model:soramikaduki/Starling-RP-ja-westlake-chatvector",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T07:21:52+00:00 | [
"2311.03099",
"2306.01708"
] | [
"ja"
] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #not-for-all-audiences #ja #arxiv-2311.03099 #arxiv-2306.01708 #base_model-soramikaduki/Antler-RP-ja-westlake-chatvector #base_model-soramikaduki/Starling-RP-ja-westlake-chatvector #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| StarAntler-RP-WestLake-chatvector
=================================
This is a merge of pre-trained language models created using mergekit.
This model merges two models whose NSFW capability was strengthened with the ChatVector method.
The first model applies the ChatVector method to Aratako/Antler-7B-RP, created by Aratako, using senseable/WestLake-7B-v2 to strengthen its NSFW capability.
The second model likewise applies the method to Aratako's Japanese-Starling-ChatV-7B-RP, again using senseable/WestLake-7B-v2.
These two models were then merged with equal weights. Merge details are given below.
Merge Details
-------------
### Merge Method
This model was merged using the DARE TIES merge method using soramikaduki/Antler-RP-ja-westlake-chatvector as a base.
### Models Merged
The following models were included in the merge:
* soramikaduki/Starling-RP-ja-westlake-chatvector
### Configuration
The following YAML configuration was used to produce this model:
### Performance
Scores on "<a href="URL
These scores were obtained on ELYZA-tasks-100, a benchmark for instruction-tuned Japanese models, with gpt-4-0125-preview as the evaluator.
| [
"### Merge Method\n\n\nThis model was merged using the DARE TIES merge method using soramikaduki/Antler-RP-ja-westlake-chatvector as a base.",
"### Models Merged\n\n\nThe following models were included in the merge:\n\n\n* soramikaduki/Starling-RP-ja-westlake-chatvector",
"### Configuration\n\n\nThe following YAML configuration was used to produce this model:",
"### Performance\n\n\n\nScores on \"<a href=\"URL\n\n\nこのスコアはinstruction-tuningを行った日本語モデルのベンチマーク「ELYZA-tasks-100」を使い、gpt-4-0125-previewにより評価させたものです。"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #not-for-all-audiences #ja #arxiv-2311.03099 #arxiv-2306.01708 #base_model-soramikaduki/Antler-RP-ja-westlake-chatvector #base_model-soramikaduki/Starling-RP-ja-westlake-chatvector #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Merge Method\n\n\nThis model was merged using the DARE TIES merge method using soramikaduki/Antler-RP-ja-westlake-chatvector as a base.",
"### Models Merged\n\n\nThe following models were included in the merge:\n\n\n* soramikaduki/Starling-RP-ja-westlake-chatvector",
"### Configuration\n\n\nThe following YAML configuration was used to produce this model:",
"### Performance\n\n\n\nScores on \"<a href=\"URL\n\n\nこのスコアはinstruction-tuningを行った日本語モデルのベンチマーク「ELYZA-tasks-100」を使い、gpt-4-0125-previewにより評価させたものです。"
] |
null | null |
# <a name="introduction"></a> KeyBERTVi - Keyword Extraction for Vietnamese language
Inspired by [KeyBERT](https://github.com/MaartenGr/KeyBERT), KeyBERTVi implements a similar keyword extraction technique that leverages the embeddings of [PhoBERT](https://huggingface.co/vinai/phobert-base) and minimal linguistic properties to extract keywords and keyphrases that are most similar to the document.
<a name="toc"/></a>
## Table of Contents
<!--ts-->
1. [About the Project](#about)
2. [Getting Started](#gettingstarted)
2.1. [Installation](#installation)
2.2. [Basic Usage](#usage)
2.3. [Diversify Results](#diversify)
3. [Limitations](#limitations)
<!--te-->
<a name="about"/></a>
## 1. About the Project
This implementation took inspiration from the simple yet intuitive and powerful method of [KeyBERT](https://github.com/MaartenGr/KeyBERT/), applied to the Vietnamese language. PhoBERT is used to generate both document-level embeddings and word-level embeddings for extracted N-grams. Cosine similarity is then used to determine which N-grams are most similar to the document-level embedding and can therefore be regarded as most representative of the document.
Preprocessing catered to the Vietnamese language was applied.
Test with your own documents at [KeyBERTVi Space](https://huggingface.co/spaces/tpha4308/keybertvi-app).
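The core scoring idea can be sketched in a few lines: embed the document and each candidate N-gram with PhoBERT, then rank candidates by cosine similarity to the document embedding. The snippet below is a simplified illustration only — it mean-pools token embeddings, takes a hypothetical `candidates` list as input, and skips the VnCoreNLP word segmentation and NER filtering that the actual pipeline applies.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base-v2")
model = AutoModel.from_pretrained("vinai/phobert-base-v2").eval()

def embed(text: str) -> torch.Tensor:
    # Mean-pool the last hidden state into a single sentence vector.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)

def rank_keywords(document: str, candidates: list[str], top_n: int = 5):
    # Score each candidate N-gram by cosine similarity to the document embedding.
    doc_vec = embed(document)
    scored = [(c, torch.cosine_similarity(doc_vec, embed(c), dim=0).item()) for c in candidates]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_n]
```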
<a name="gettingstarted"/></a>
## 2. Getting Started
<a name="installation"/></a>
### 2.1. Setting up
```bash
git clone https://huggingface.co/tpha4308/keyword-extraction-viet
```
You can use the existing pre-trained models in the repo or download your own and put them in the `pretrained-models` folder.
```python
phobert = AutoModel.from_pretrained("vinai/phobert-base-v2")
phobert.eval()
torch.save(phobert, f'{dir_path}/pretrained-models/phobert.pt')
ner_model = AutoModelForTokenClassification.from_pretrained("NlpHUST/ner-vietnamese-electra-base")
ner_model.eval()
torch.save(ner_model, f'{dir_path}/pretrained-models/ner-vietnamese-electra-base.pt')
```
**Note:** `dir_path` is the absolute path to the repo.
As [PhoBERT](https://huggingface.co/vinai/phobert-base) requires [VnCoreNLP](https://github.com/vncorenlp/VnCoreNLP) as part of pre-processing, the folder `pretrained-models/vncorenlp` is required. To download your own:
```bash
pip install py_vncorenlp
```
```python
import py_vncorenlp
py_vncorenlp.download_model(save_dir=f'{dir_path}/pretrained-models/vncorenlp')
```
<a name="usage"/></a>
### 2.2. Basic Usage
```python
phobert = torch.load(f'{dir_path}/pretrained-models/phobert.pt')
phobert.eval()
ner_model = torch.load(f'{dir_path}/pretrained-models/ner-vietnamese-electra-base.pt')
ner_model.eval()
kw_pipeline = KeywordExtractorPipeline(phobert, ner_model)
```
```python
title = "Truyền thuyết và hiện tại Thành Cổ Loa"
text = """
Nhắc đến Cổ Loa, người ta nghĩ ngay đến truyền thuyết về An Dương Vương được thần Kim Quy bày cho cách xây thành, về chiếc lẫy nỏ thần làm từ móng chân rùa thần và mối tình bi thương Mỵ Châu – Trọng Thủy. Đằng sau những câu chuyện thiên về tâm linh ấy, thế hệ con cháu còn khám phá được những giá trị khảo cổ to lớn của Cổ Loa.
Khu di tích Cổ Loa cách trung – tâm Hà Nội 17km thuộc huyện Đông Anh, Hà Nội, có diện tích bảo tồn gần 500ha được coi là địa chỉ văn hóa đặc biệt của thủ đô và cả nước. Cổ Loa có hàng loạt di chỉ khảo cổ học đã được phát hiện, phản ánh quá trình phát triển liên tục của dân tộc ta từ sơ khai qua các thời kỳ đồ đồng, đồ đá và đồ sắt mà đỉnh cao là văn hóa Đông Sơn, vẫn được coi là nền văn minh sông Hồng thời kỳ tiền sử của dân tộc Việt Nam.
Cổ Loa từng là kinh đô của nhà nước Âu Lạc thời kỳ An Dương Vương (thế kỷ III TCN) và của nước Đại Việt thời Ngô Quyền (thế kỷ X) mà thành Cổ Loa là một di tích minh chứng còn lại cho đến ngày nay. Thành Cổ Loa được các nhà khảo cổ học đánh giá là “tòa thành cổ nhất, quy mô lớn vào bậc nhất, cấu trúc cũng thuộc loại độc đáo nhất trong lịch sử xây dựng thành lũy của người Việt cổ”.
"""
inp = {"title": title, "text": text}
kws = kw_pipeline(inputs=inp, min_freq=1, ngram_n=(1, 3), top_n=5, diversify_result=False)
[('Khu di_tích Cổ_Loa', 0.88987315),
('Âu_Lạc thời_kỳ An_Dương_Vương', 0.8680505),
('thành Cổ_Loa', 0.8661723),
('hàng_loạt di_chỉ khảo_cổ_học', 0.8644231),
('lịch_sử xây_dựng thành_luỹ', 0.8375939)]
```
<a name="diversify"/></a>
### 2.3. Diversify Results
More information needed
<a name="limitations"/></a>
## 3. Limitations
More information needed
## References
1. https://github.com/MaartenGr/KeyBERT
2. https://github.com/VinAIResearch/PhoBERT
3. https://huggingface.co/NlpHUST/ner-vietnamese-electra-base
4. https://github.com/undertheseanlp/underthesea
5. https://github.com/vncorenlp/VnCoreNLP
| {"language": ["vi"], "tags": ["keyword-extraction"]} | tpha4308/keyword-extraction-viet | null | [
"keyword-extraction",
"vi",
"region:us"
] | null | 2024-04-18T07:22:40+00:00 | [] | [
"vi"
] | TAGS
#keyword-extraction #vi #region-us
|
# <a name="introduction"></a> KeyBERTVi - Keyword Extraction for Vietnamese language
Inspired by KeyBERT, KeyBERTVi implements a similar keyword extraction technique that leverages the embeddings of PhoBERT and minimal linguistic properties to extract keywords and keyphrases that are most similar to the document.
<a name="toc"/></a>
## Table of Contents
1. About the Project
2. Getting Started
2.1. Installation
2.2. Basic Usage
2.3. Diversify Results
3. Limitations
<a name="about"/></a>
## 1. About the Project
This implementation took inspiration from the simple yet intuitive and powerful method of KeyBERT, applied to the Vietnamese language. PhoBERT is used to generate both document-level embeddings and word-level embeddings for extracted N-grams. Cosine similarity is then used to determine which N-grams are most similar to the document-level embedding and can therefore be regarded as most representative of the document.
Preprocessing catered to the Vietnamese language was applied.
Test with your own documents at KeyBERTVi Space.
<a name="gettingstarted"/></a>
## 2. Getting Started
<a name="installation"/></a>
### 2.1. Setting up
You can use existing pre-trained models in the repo or download your own and put them in 'pretrained-models' folder.
Note: 'dir_path' is the absolute path to the repo.
As PhoBERT requires VnCoreNLP as part of pre-processing, the folder 'pretrained-models/vncorenlp' is required. To download your own:
<a name="usage"/></a>
### 2.2. Basic Usage
<a name="diversify"/></a>
### 2.3. Diversify Results
More information needed
<a name="limitations"/></a>
## 3. Limitations
More information needed
## References
1. URL
2. URL
3. URL
4. URL
5. URL
| [
"# <a name=\"introduction\"></a> KeyBERTVi - Keyword Extraction for Vietnamese language\n\nInspired by KeyBERT, KeyBERTVi implements a similar keyword extraction technique that leverages the embeddings of PhoBERT and minimal linguistics properties to extract keywords and keyphrases that are most similar to the document.\n\n<a name=\"toc\"/></a>",
"## Table of Contents \n \n 1. About the Project \n 2. Getting Started \n 2.1. Installation \n 2.2. Basic Usage \n 2.3. Diversify Results \n 3. Limitations \n \n\n<a name=\"about\"/></a>",
"## 1. About the Project\n\nThis implementation took inspiration from the simple yet intuitive and powerful method of KeyBERT, applied for the Vietnamese language. PhoBERT are used to generate both document-level embeddings and word-level embeddings for extracted N-grams. Cosine similarity is then used to compute which N-grams are most similar to the document-level embedding, thus can be perceived as most representative of the document. \nPreprocessing catered to the Vietnamese language was applied. \n\nTest with your own documents at KeyBERTVi Space. \n\n<a name=\"gettingstarted\"/></a>",
"## 2. Getting Started\n<a name=\"installation\"/></a>",
"### 2.1. Setting up\n\n\n\nYou can use existing pre-trained models in the repo or download your own and put them in 'pretrained-models' folder. \n\n\n\nNote: 'dir_path' is the absolute path to the repo. \n\nAs PhoBERT requires VnCoreNLP as part of pre-processing, the folder 'pretrained-models/vncorenlp' is required. To download your own: \n\n\n\n\n<a name=\"usage\"/></a>",
"### 2.2. Basic Usage\n\n\n\n\n\n<a name=\"diversify\"/></a>",
"### 2.3. Diversify Results\n\nMore information needed\n\n<a name=\"limitations\"/></a>",
"## 3. Limitations\n\nMore information needed",
"## References\n1. URL\n2. URL\n3. URL\n4. URL\n5. URL"
] | [
"TAGS\n#keyword-extraction #vi #region-us \n",
"# <a name=\"introduction\"></a> KeyBERTVi - Keyword Extraction for Vietnamese language\n\nInspired by KeyBERT, KeyBERTVi implements a similar keyword extraction technique that leverages the embeddings of PhoBERT and minimal linguistics properties to extract keywords and keyphrases that are most similar to the document.\n\n<a name=\"toc\"/></a>",
"## Table of Contents \n \n 1. About the Project \n 2. Getting Started \n 2.1. Installation \n 2.2. Basic Usage \n 2.3. Diversify Results \n 3. Limitations \n \n\n<a name=\"about\"/></a>",
"## 1. About the Project\n\nThis implementation took inspiration from the simple yet intuitive and powerful method of KeyBERT, applied for the Vietnamese language. PhoBERT are used to generate both document-level embeddings and word-level embeddings for extracted N-grams. Cosine similarity is then used to compute which N-grams are most similar to the document-level embedding, thus can be perceived as most representative of the document. \nPreprocessing catered to the Vietnamese language was applied. \n\nTest with your own documents at KeyBERTVi Space. \n\n<a name=\"gettingstarted\"/></a>",
"## 2. Getting Started\n<a name=\"installation\"/></a>",
"### 2.1. Setting up\n\n\n\nYou can use existing pre-trained models in the repo or download your own and put them in 'pretrained-models' folder. \n\n\n\nNote: 'dir_path' is the absolute path to the repo. \n\nAs PhoBERT requires VnCoreNLP as part of pre-processing, the folder 'pretrained-models/vncorenlp' is required. To download your own: \n\n\n\n\n<a name=\"usage\"/></a>",
"### 2.2. Basic Usage\n\n\n\n\n\n<a name=\"diversify\"/></a>",
"### 2.3. Diversify Results\n\nMore information needed\n\n<a name=\"limitations\"/></a>",
"## 3. Limitations\n\nMore information needed",
"## References\n1. URL\n2. URL\n3. URL\n4. URL\n5. URL"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large_ArLAMA
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
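
For orientation, these settings map roughly onto the 🤗 `TrainingArguments` below; this is only a sketch, since the masked-language-modeling dataset and data collator used for this run are not documented here.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-large_ArLAMA",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
# The Adam betas/epsilon listed above are the Trainer's optimizer defaults.
```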
### Framework versions
- Transformers 4.27.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
| {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "xlm-roberta-large_ArLAMA", "results": []}]} | AfnanTS/xlm-roberta-large_ArLAMA | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:23:44+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #xlm-roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# xlm-roberta-large_ArLAMA
This model is a fine-tuned version of xlm-roberta-large on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.27.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
| [
"# xlm-roberta-large_ArLAMA\n\nThis model is a fine-tuned version of xlm-roberta-large on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Framework versions\n\n- Transformers 4.27.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# xlm-roberta-large_ArLAMA\n\nThis model is a fine-tuned version of xlm-roberta-large on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Framework versions\n\n- Transformers 4.27.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.13.3"
] |
null | transformers |
# Alex01837178373/Vikhr-tiny-0.1-Q5_K_M-GGUF
This model was converted to GGUF format from [`Vikhrmodels/Vikhr-tiny-0.1`](https://huggingface.co/Vikhrmodels/Vikhr-tiny-0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Vikhrmodels/Vikhr-tiny-0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Alex01837178373/Vikhr-tiny-0.1-Q5_K_M-GGUF --model vikhr-tiny-0.1.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Alex01837178373/Vikhr-tiny-0.1-Q5_K_M-GGUF --model vikhr-tiny-0.1.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m vikhr-tiny-0.1.Q5_K_M.gguf -n 128
```
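
As an alternative to the CLI, the same GGUF file can be loaded from Python with `llama-cpp-python`. The snippet below is a minimal sketch and assumes `pip install llama-cpp-python` and that the quantized file has already been downloaded locally.

```python
from llama_cpp import Llama

# Load the locally downloaded quantized checkpoint.
llm = Llama(model_path="vikhr-tiny-0.1.Q5_K_M.gguf", n_ctx=2048)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```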
| {"language": ["ru", "en", "zh"], "license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"]} | Alex01837178373/Vikhr-tiny-0.1-Q5_K_M-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"ru",
"en",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:24:49+00:00 | [] | [
"ru",
"en",
"zh"
] | TAGS
#transformers #gguf #llama-cpp #gguf-my-repo #ru #en #zh #license-apache-2.0 #endpoints_compatible #region-us
|
# Alex01837178373/Vikhr-tiny-0.1-Q5_K_M-GGUF
This model was converted to GGUF format from 'Vikhrmodels/Vikhr-tiny-0.1' using URL via URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# Alex01837178373/Vikhr-tiny-0.1-Q5_K_M-GGUF\nThis model was converted to GGUF format from 'Vikhrmodels/Vikhr-tiny-0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #ru #en #zh #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Alex01837178373/Vikhr-tiny-0.1-Q5_K_M-GGUF\nThis model was converted to GGUF format from 'Vikhrmodels/Vikhr-tiny-0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
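
In the absence of documented usage, a minimal sketch with the 🤗 Transformers causal-LM API is given below; the repository id is taken from this card's metadata, and the chat template and generation settings are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OwOOwO/dumbo-krillin51"  # repository id from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]  # assumes a chat template is defined
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```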
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOOwO/dumbo-krillin51 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T07:26:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Edgar404/donut-sroie-qlora | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:26:31+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [laszlokiss27/doodle-dash2](https://huggingface.co/laszlokiss27/doodle-dash2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7177
- Accuracy: 0.8121
## Model description
More information needed
## Intended uses & limitations
More information needed
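
A minimal inference sketch is shown below; the repository id `laszlokiss27/results` is taken from this card's metadata, and the image path is a placeholder.

```python
from transformers import pipeline

# Hypothetical usage of the fine-tuned doodle classifier.
classifier = pipeline("image-classification", model="laszlokiss27/results")
print(classifier("doodle.png"))  # placeholder path to an input image
```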
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:------:|:---------------:|:--------:|
| 0.9709 | 0.0256 | 5000 | 0.9170 | 0.7612 |
| 0.9635 | 0.0513 | 10000 | 0.9147 | 0.7623 |
| 0.9518 | 0.0769 | 15000 | 0.9081 | 0.7646 |
| 0.9472 | 0.1026 | 20000 | 0.9044 | 0.7656 |
| 0.9443 | 0.1282 | 25000 | 0.9061 | 0.7660 |
| 0.93 | 0.1538 | 30000 | 0.9071 | 0.7651 |
| 0.9206 | 0.1795 | 35000 | 0.8963 | 0.7680 |
| 0.9214 | 0.2051 | 40000 | 0.8910 | 0.7693 |
| 0.912 | 0.2308 | 45000 | 0.8914 | 0.7687 |
| 0.9113 | 0.2564 | 50000 | 0.8801 | 0.7719 |
| 0.9035 | 0.2820 | 55000 | 0.8803 | 0.7723 |
| 0.9035 | 0.3077 | 60000 | 0.8798 | 0.7717 |
| 0.8898 | 0.3333 | 65000 | 0.8822 | 0.7719 |
| 0.8874 | 0.3590 | 70000 | 0.8703 | 0.7748 |
| 0.8848 | 0.3846 | 75000 | 0.8623 | 0.7764 |
| 0.8817 | 0.4102 | 80000 | 0.8609 | 0.7766 |
| 0.8765 | 0.4359 | 85000 | 0.8599 | 0.7769 |
| 0.8763 | 0.4615 | 90000 | 0.8532 | 0.7787 |
| 0.8714 | 0.4872 | 95000 | 0.8572 | 0.7774 |
| 0.869 | 0.5128 | 100000 | 0.8479 | 0.7796 |
| 0.8672 | 0.5384 | 105000 | 0.8480 | 0.7798 |
| 0.8632 | 0.5641 | 110000 | 0.8520 | 0.7792 |
| 0.8592 | 0.5897 | 115000 | 0.8433 | 0.7811 |
| 0.8607 | 0.6154 | 120000 | 0.8428 | 0.7811 |
| 0.853 | 0.6410 | 125000 | 0.8375 | 0.7827 |
| 0.8541 | 0.6666 | 130000 | 0.8455 | 0.7805 |
| 0.8473 | 0.6923 | 135000 | 0.8330 | 0.7838 |
| 0.8449 | 0.7179 | 140000 | 0.8305 | 0.7838 |
| 0.8465 | 0.7436 | 145000 | 0.8274 | 0.7850 |
| 0.8423 | 0.7692 | 150000 | 0.8325 | 0.7836 |
| 0.8454 | 0.7948 | 155000 | 0.8270 | 0.7849 |
| 0.8358 | 0.8205 | 160000 | 0.8328 | 0.7838 |
| 0.8389 | 0.8461 | 165000 | 0.8209 | 0.7868 |
| 0.8332 | 0.8718 | 170000 | 0.8340 | 0.7834 |
| 0.8357 | 0.8974 | 175000 | 0.8200 | 0.7864 |
| 0.8356 | 0.9230 | 180000 | 0.8162 | 0.7877 |
| 0.835 | 0.9487 | 185000 | 0.8181 | 0.7874 |
| 0.8298 | 0.9743 | 190000 | 0.8180 | 0.7874 |
| 0.8285 | 1.0000 | 195000 | 0.8154 | 0.7878 |
| 0.8138 | 1.0256 | 200000 | 0.8119 | 0.7889 |
| 0.8104 | 1.0512 | 205000 | 0.8087 | 0.7887 |
| 0.8162 | 1.0769 | 210000 | 0.8073 | 0.7895 |
| 0.8122 | 1.1025 | 215000 | 0.8053 | 0.7902 |
| 0.807 | 1.1282 | 220000 | 0.8064 | 0.7900 |
| 0.8114 | 1.1538 | 225000 | 0.8043 | 0.7907 |
| 0.8165 | 1.1794 | 230000 | 0.8042 | 0.7911 |
| 0.8124 | 1.2051 | 235000 | 0.8009 | 0.7910 |
| 0.8092 | 1.2307 | 240000 | 0.8019 | 0.7914 |
| 0.8023 | 1.2564 | 245000 | 0.7979 | 0.7921 |
| 0.8058 | 1.2820 | 250000 | 0.7988 | 0.7922 |
| 0.8057 | 1.3076 | 255000 | 0.7976 | 0.7923 |
| 0.8076 | 1.3333 | 260000 | 0.7976 | 0.7921 |
| 0.805 | 1.3589 | 265000 | 0.7953 | 0.7930 |
| 0.797 | 1.3846 | 270000 | 0.7990 | 0.7926 |
| 0.7997 | 1.4102 | 275000 | 0.7929 | 0.7935 |
| 0.8028 | 1.4358 | 280000 | 0.7933 | 0.7933 |
| 0.7981 | 1.4615 | 285000 | 0.7905 | 0.7934 |
| 0.8002 | 1.4871 | 290000 | 0.7965 | 0.7924 |
| 0.7984 | 1.5128 | 295000 | 0.7915 | 0.7933 |
| 0.7973 | 1.5384 | 300000 | 0.7950 | 0.7932 |
| 0.7933 | 1.5640 | 305000 | 0.7865 | 0.7950 |
| 0.7927 | 1.5897 | 310000 | 0.7886 | 0.7946 |
| 0.799 | 1.6153 | 315000 | 0.7840 | 0.7954 |
| 0.7961 | 1.6410 | 320000 | 0.8132 | 0.7901 |
| 0.7866 | 1.6666 | 325000 | 0.7829 | 0.7958 |
| 0.7898 | 1.6922 | 330000 | 0.7813 | 0.7959 |
| 0.7885 | 1.7179 | 335000 | 0.7796 | 0.7969 |
| 0.7901 | 1.7435 | 340000 | 0.7817 | 0.7958 |
| 0.7916 | 1.7692 | 345000 | 0.7823 | 0.7962 |
| 0.787 | 1.7948 | 350000 | 0.7789 | 0.7969 |
| 0.7822 | 1.8204 | 355000 | 0.7787 | 0.7968 |
| 0.7844 | 1.8461 | 360000 | 0.7754 | 0.7981 |
| 0.7849 | 1.8717 | 365000 | 0.7775 | 0.7972 |
| 0.7845 | 1.8974 | 370000 | 0.7761 | 0.7973 |
| 0.7905 | 1.9230 | 375000 | 0.7736 | 0.7983 |
| 0.788 | 1.9486 | 380000 | 0.7738 | 0.7978 |
| 0.7832 | 1.9743 | 385000 | 0.7719 | 0.7980 |
| 0.7787 | 1.9999 | 390000 | 0.7710 | 0.7986 |
| 0.767 | 2.0256 | 395000 | 0.7717 | 0.7985 |
| 0.7666 | 2.0512 | 400000 | 0.7698 | 0.7989 |
| 0.7631 | 2.0768 | 405000 | 0.7719 | 0.7982 |
| 0.7634 | 2.1025 | 410000 | 0.7684 | 0.7994 |
| 0.7621 | 2.1281 | 415000 | 0.7707 | 0.7987 |
| 0.7694 | 2.1538 | 420000 | 0.7700 | 0.7994 |
| 0.7648 | 2.1794 | 425000 | 0.7678 | 0.7995 |
| 0.7612 | 2.2050 | 430000 | 0.7673 | 0.7995 |
| 0.7627 | 2.2307 | 435000 | 0.7671 | 0.7997 |
| 0.766 | 2.2563 | 440000 | 0.7649 | 0.8003 |
| 0.7635 | 2.2820 | 445000 | 0.7653 | 0.8000 |
| 0.761 | 2.3076 | 450000 | 0.7647 | 0.8000 |
| 0.7649 | 2.3332 | 455000 | 0.7661 | 0.8001 |
| 0.7589 | 2.3589 | 460000 | 0.7630 | 0.8005 |
| 0.7586 | 2.3845 | 465000 | 0.7703 | 0.7988 |
| 0.7595 | 2.4102 | 470000 | 0.7640 | 0.8003 |
| 0.7622 | 2.4358 | 475000 | 0.7627 | 0.8005 |
| 0.7593 | 2.4614 | 480000 | 0.7605 | 0.8013 |
| 0.7558 | 2.4871 | 485000 | 0.7609 | 0.8012 |
| 0.7599 | 2.5127 | 490000 | 0.7651 | 0.8002 |
| 0.7587 | 2.5384 | 495000 | 0.7589 | 0.8016 |
| 0.7588 | 2.5640 | 500000 | 0.7570 | 0.8024 |
| 0.762 | 2.5896 | 505000 | 0.7566 | 0.8020 |
| 0.7526 | 2.6153 | 510000 | 0.7602 | 0.8013 |
| 0.7587 | 2.6409 | 515000 | 0.7560 | 0.8021 |
| 0.7522 | 2.6666 | 520000 | 0.7557 | 0.8026 |
| 0.7546 | 2.6922 | 525000 | 0.7542 | 0.8026 |
| 0.7542 | 2.7178 | 530000 | 0.7543 | 0.8029 |
| 0.7509 | 2.7435 | 535000 | 0.7542 | 0.8029 |
| 0.7515 | 2.7691 | 540000 | 0.7585 | 0.8016 |
| 0.7508 | 2.7948 | 545000 | 0.7553 | 0.8024 |
| 0.7523 | 2.8204 | 550000 | 0.7531 | 0.8028 |
| 0.756 | 2.8460 | 555000 | 0.7511 | 0.8035 |
| 0.7559 | 2.8717 | 560000 | 0.7500 | 0.8038 |
| 0.75 | 2.8973 | 565000 | 0.7494 | 0.8038 |
| 0.7492 | 2.9230 | 570000 | 0.7511 | 0.8035 |
| 0.7481 | 2.9486 | 575000 | 0.7471 | 0.8044 |
| 0.751 | 2.9742 | 580000 | 0.7478 | 0.8043 |
| 0.7545 | 2.9999 | 585000 | 0.7595 | 0.8019 |
| 0.7299 | 3.0255 | 590000 | 0.7478 | 0.8042 |
| 0.7305 | 3.0512 | 595000 | 0.7487 | 0.8047 |
| 0.7343 | 3.0768 | 600000 | 0.7466 | 0.8047 |
| 0.731 | 3.1024 | 605000 | 0.7472 | 0.8045 |
| 0.733 | 3.1281 | 610000 | 0.7460 | 0.8046 |
| 0.7351 | 3.1537 | 615000 | 0.7486 | 0.8043 |
| 0.7372 | 3.1794 | 620000 | 0.7446 | 0.8052 |
| 0.7299 | 3.2050 | 625000 | 0.7478 | 0.8045 |
| 0.7351 | 3.2306 | 630000 | 0.7458 | 0.8047 |
| 0.7304 | 3.2563 | 635000 | 0.7460 | 0.8049 |
| 0.7335 | 3.2819 | 640000 | 0.7451 | 0.8049 |
| 0.7351 | 3.3076 | 645000 | 0.7416 | 0.8058 |
| 0.7324 | 3.3332 | 650000 | 0.7420 | 0.8058 |
| 0.732 | 3.3588 | 655000 | 0.7426 | 0.8057 |
| 0.7286 | 3.3845 | 660000 | 0.7418 | 0.8062 |
| 0.7331 | 3.4101 | 665000 | 0.7420 | 0.8059 |
| 0.729 | 3.4358 | 670000 | 0.7402 | 0.8065 |
| 0.7336 | 3.4614 | 675000 | 0.7409 | 0.8063 |
| 0.7275 | 3.4870 | 680000 | 0.7398 | 0.8064 |
| 0.7298 | 3.5127 | 685000 | 0.7388 | 0.8069 |
| 0.724 | 3.5383 | 690000 | 0.7365 | 0.8070 |
| 0.7266 | 3.5640 | 695000 | 0.7373 | 0.8072 |
| 0.7282 | 3.5896 | 700000 | 0.7371 | 0.8074 |
| 0.7272 | 3.6152 | 705000 | 0.7360 | 0.8073 |
| 0.7227 | 3.6409 | 710000 | 0.7360 | 0.8072 |
| 0.7275 | 3.6665 | 715000 | 0.7358 | 0.8073 |
| 0.7299 | 3.6922 | 720000 | 0.7422 | 0.8063 |
| 0.7363 | 3.7178 | 725000 | 0.7361 | 0.8072 |
| 0.7274 | 3.7434 | 730000 | 0.7334 | 0.8082 |
| 0.7282 | 3.7691 | 735000 | 0.7347 | 0.8081 |
| 0.7239 | 3.7947 | 740000 | 0.7326 | 0.8085 |
| 0.7225 | 3.8204 | 745000 | 0.7352 | 0.8076 |
| 0.7242 | 3.8460 | 750000 | 0.7320 | 0.8086 |
| 0.7291 | 3.8716 | 755000 | 0.7317 | 0.8089 |
| 0.7292 | 3.8973 | 760000 | 0.7310 | 0.8087 |
| 0.7247 | 3.9229 | 765000 | 0.7310 | 0.8083 |
| 0.7286 | 3.9486 | 770000 | 0.7326 | 0.8084 |
| 0.7237 | 3.9742 | 775000 | 0.7303 | 0.8088 |
| 0.7187 | 3.9998 | 780000 | 0.7298 | 0.8090 |
| 0.7077 | 4.0255 | 785000 | 0.7316 | 0.8084 |
| 0.7108 | 4.0511 | 790000 | 0.7316 | 0.8084 |
| 0.7025 | 4.0768 | 795000 | 0.7300 | 0.8093 |
| 0.708 | 4.1024 | 800000 | 0.7295 | 0.8093 |
| 0.7067 | 4.1280 | 805000 | 0.7288 | 0.8094 |
| 0.7123 | 4.1537 | 810000 | 0.7287 | 0.8094 |
| 0.707 | 4.1793 | 815000 | 0.7283 | 0.8095 |
| 0.7033 | 4.2050 | 820000 | 0.7282 | 0.8099 |
| 0.7128 | 4.2306 | 825000 | 0.7272 | 0.8099 |
| 0.7053 | 4.2562 | 830000 | 0.7284 | 0.8095 |
| 0.7097 | 4.2819 | 835000 | 0.7268 | 0.8098 |
| 0.7101 | 4.3075 | 840000 | 0.7267 | 0.8097 |
| 0.7074 | 4.3332 | 845000 | 0.7261 | 0.8102 |
| 0.7034 | 4.3588 | 850000 | 0.7257 | 0.8101 |
| 0.7059 | 4.3844 | 855000 | 0.7262 | 0.8098 |
| 0.7008 | 4.4101 | 860000 | 0.7247 | 0.8100 |
| 0.7021 | 4.4357 | 865000 | 0.7241 | 0.8103 |
| 0.707 | 4.4614 | 870000 | 0.7243 | 0.8105 |
| 0.7034 | 4.4870 | 875000 | 0.7238 | 0.8106 |
| 0.7055 | 4.5126 | 880000 | 0.7233 | 0.8106 |
| 0.7056 | 4.5383 | 885000 | 0.7231 | 0.8107 |
| 0.7029 | 4.5639 | 890000 | 0.7226 | 0.8108 |
| 0.7048 | 4.5896 | 895000 | 0.7224 | 0.8111 |
| 0.7031 | 4.6152 | 900000 | 0.7221 | 0.8110 |
| 0.7034 | 4.6408 | 905000 | 0.7216 | 0.8112 |
| 0.7012 | 4.6665 | 910000 | 0.7218 | 0.8113 |
| 0.702 | 4.6921 | 915000 | 0.7209 | 0.8114 |
| 0.7018 | 4.7178 | 920000 | 0.7207 | 0.8115 |
| 0.7056 | 4.7434 | 925000 | 0.7201 | 0.8116 |
| 0.7005 | 4.7690 | 930000 | 0.7199 | 0.8118 |
| 0.7005 | 4.7947 | 935000 | 0.7197 | 0.8117 |
| 0.708 | 4.8203 | 940000 | 0.7189 | 0.8117 |
| 0.6956 | 4.8460 | 945000 | 0.7190 | 0.8118 |
| 0.7074 | 4.8716 | 950000 | 0.7185 | 0.8120 |
| 0.6964 | 4.8972 | 955000 | 0.7184 | 0.8121 |
| 0.7048 | 4.9229 | 960000 | 0.7188 | 0.8120 |
| 0.7018 | 4.9485 | 965000 | 0.7178 | 0.8122 |
| 0.7006 | 4.9742 | 970000 | 0.7177 | 0.8121 |
| 0.7005 | 4.9998 | 975000 | 0.7177 | 0.8121 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
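The card above gives no usage example, so here is a minimal, illustrative inference sketch for the resulting image-classification checkpoint; the repo id comes from this card's metadata, and the image path is only a placeholder:

```python
from PIL import Image
from transformers import pipeline

# Repo id as given in this card's metadata; adjust if the weights live elsewhere.
classifier = pipeline("image-classification", model="laszlokiss27/results")

# "doodle.png" is a placeholder path for whatever sketch you want to classify.
image = Image.open("doodle.png").convert("RGB")
print(classifier(image))
```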
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "laszlokiss27/doodle-dash2", "model-index": [{"name": "results", "results": []}]} | laszlokiss27/results | null | [
"transformers",
"pytorch",
"safetensors",
"mobilevitv2",
"image-classification",
"generated_from_trainer",
"base_model:laszlokiss27/doodle-dash2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:27:37+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #mobilevitv2 #image-classification #generated_from_trainer #base_model-laszlokiss27/doodle-dash2 #autotrain_compatible #endpoints_compatible #region-us
| results
=======
This model is a fine-tuned version of laszlokiss27/doodle-dash2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7177
* Accuracy: 0.8121
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0008
* train\_batch\_size: 256
* eval\_batch\_size: 256
* seed: 42
* distributed\_type: multi-GPU
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.2+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0008\n* train\\_batch\\_size: 256\n* eval\\_batch\\_size: 256\n* seed: 42\n* distributed\\_type: multi-GPU\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #pytorch #safetensors #mobilevitv2 #image-classification #generated_from_trainer #base_model-laszlokiss27/doodle-dash2 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0008\n* train\\_batch\\_size: 256\n* eval\\_batch\\_size: 256\n* seed: 42\n* distributed\\_type: multi-GPU\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [coffie3/0x6](https://huggingface.co/coffie3/0x6)
* [lxsure/Sniper_28](https://huggingface.co/lxsure/Sniper_28)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: coffie3/0x6
layer_range: [0, 24]
- model: lxsure/Sniper_28
layer_range: [0, 24]
merge_method: slerp
base_model: lxsure/Sniper_28
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.3
dtype: bfloat16
```
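The merged checkpoint can then be loaded like any other causal LM. A minimal, illustrative sketch (repo id taken from this card's metadata; `device_map="auto"` assumes `accelerate` is installed, and the prompt is arbitrary):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sumail/Ame12"  # repo id from this card's metadata

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```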
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["coffie3/0x6", "lxsure/Sniper_28"]} | Sumail/Ame12 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:coffie3/0x6",
"base_model:lxsure/Sniper_28",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:28:30+00:00 | [] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #mergekit #merge #conversational #base_model-coffie3/0x6 #base_model-lxsure/Sniper_28 #autotrain_compatible #endpoints_compatible #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* coffie3/0x6
* lxsure/Sniper_28
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* coffie3/0x6\n* lxsure/Sniper_28",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #mergekit #merge #conversational #base_model-coffie3/0x6 #base_model-lxsure/Sniper_28 #autotrain_compatible #endpoints_compatible #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* coffie3/0x6\n* lxsure/Sniper_28",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers | # Alsebay/Nutopia-7B AWQ
- Model creator: [Alsebay](https://huggingface.co/Alsebay)
- Original model: [Nutopia-7B](https://huggingface.co/Alsebay/Nutopia-7B)
## Model Summary
Testing purposes only; it does not seem to be good at roleplaying 😢
This model was merged using the SLERP merge method.
The following models were included in the merge:
* [NurtureAI/neural-chat-7b-v3-1-16k](https://huggingface.co/NurtureAI/neural-chat-7b-v3-1-16k)
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
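As with other AWQ repositories, the quantized weights should load through transformers' AWQ integration (this assumes the `autoawq` package is installed); a minimal, illustrative sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/Nutopia-7B-AWQ"  # repo id from this card's metadata

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Write a short greeting.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```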
| {"library_name": "transformers", "tags": ["mergekit", "merge", "4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "base_model": ["NurtureAI/neural-chat-7b-v3-1-16k", "NousResearch/Hermes-2-Pro-Mistral-7B"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/Nutopia-7B-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"base_model:NurtureAI/neural-chat-7b-v3-1-16k",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T07:34:42+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #4-bit #AWQ #autotrain_compatible #endpoints_compatible #base_model-NurtureAI/neural-chat-7b-v3-1-16k #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #text-generation-inference #region-us
| # Alsebay/Nutopia-7B AWQ
- Model creator: Alsebay
- Original model: Nutopia-7B
## Model Summary
Testing purposes only; it does not seem to be good at roleplaying
This model was merged using the SLERP merge method.
The following models were included in the merge:
* NurtureAI/neural-chat-7b-v3-1-16k
* NousResearch/Hermes-2-Pro-Mistral-7B
| [
"# Alsebay/Nutopia-7B AWQ\n\n- Model creator: Alsebay\n- Original model: Nutopia-7B",
"## Model Summary\n\nTesting purpose only, seem it not good in Roleplaying \n\nThis model was merged using the SLERP merge method.\n\nThe following models were included in the merge:\n* NurtureAI/neural-chat-7b-v3-1-16k\n* NousResearch/Hermes-2-Pro-Mistral-7B"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #4-bit #AWQ #autotrain_compatible #endpoints_compatible #base_model-NurtureAI/neural-chat-7b-v3-1-16k #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #text-generation-inference #region-us \n",
"# Alsebay/Nutopia-7B AWQ\n\n- Model creator: Alsebay\n- Original model: Nutopia-7B",
"## Model Summary\n\nTesting purpose only, seem it not good in Roleplaying \n\nThis model was merged using the SLERP merge method.\n\nThe following models were included in the merge:\n* NurtureAI/neural-chat-7b-v3-1-16k\n* NousResearch/Hermes-2-Pro-Mistral-7B"
] |
text-generation | transformers |
# WestLakeLaser-12B-MoE
WestLakeLaser-12B-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/PrometheusLaser-7B-slerp](https://huggingface.co/allknowingroger/PrometheusLaser-7B-slerp)
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
## 🧩 Configuration
```yaml
base_model: allknowingroger/PrometheusLaser-7B-slerp
experts:
- source_model: allknowingroger/PrometheusLaser-7B-slerp
positive_prompts: ["what"]
- source_model: senseable/WestLake-7B-v2
positive_prompts: ["why"]
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/WestLakeLaser-12B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "allknowingroger/PrometheusLaser-7B-slerp", "senseable/WestLake-7B-v2"], "base_model": ["allknowingroger/PrometheusLaser-7B-slerp", "senseable/WestLake-7B-v2"]} | allknowingroger/WestLakeLaser-12B-MoE | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/PrometheusLaser-7B-slerp",
"senseable/WestLake-7B-v2",
"base_model:allknowingroger/PrometheusLaser-7B-slerp",
"base_model:senseable/WestLake-7B-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T07:35:40+00:00 | [] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #allknowingroger/PrometheusLaser-7B-slerp #senseable/WestLake-7B-v2 #base_model-allknowingroger/PrometheusLaser-7B-slerp #base_model-senseable/WestLake-7B-v2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# WestLakeLaser-12B-MoE
WestLakeLaser-12B-MoE is a Mixture of Experts (MoE) made with the following models using LazyMergekit:
* allknowingroger/PrometheusLaser-7B-slerp
* senseable/WestLake-7B-v2
## Configuration
## Usage
| [
"# WestLakeLaser-12B-MoE\n\nWestLakeLaser-12B-MoE is a Mixture of Experts (MoE) made with the following models using LazyMergekit:\n* allknowingroger/PrometheusLaser-7B-slerp\n* senseable/WestLake-7B-v2",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #allknowingroger/PrometheusLaser-7B-slerp #senseable/WestLake-7B-v2 #base_model-allknowingroger/PrometheusLaser-7B-slerp #base_model-senseable/WestLake-7B-v2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# WestLakeLaser-12B-MoE\n\nWestLakeLaser-12B-MoE is a Mixture of Experts (MoE) made with the following models using LazyMergekit:\n* allknowingroger/PrometheusLaser-7B-slerp\n* senseable/WestLake-7B-v2",
"## Configuration",
"## Usage"
] |
text-generation | transformers | > [!Important]
> Still experimental
# About this model
A remake of [version 2](https://huggingface.co/Alsebay/NarumashiRTS-V2) in safetensors format, using a safer and more stable saving method; nothing has changed much otherwise (based on the model hash). To be honest, in the previous version 2 I used an unsafe method to save the pretrained model, which could apply the LoRA layer to the model twice and give it terrible performance. (Thanks to the Unsloth community for telling me about this :D )
- **Finetuned with a roughly translated dataset, to increase accuracy on the TSF theme, which is not very popular. (lewd dataset)**
- **Finetuned from model:** SanjiWatsuki/Kunoichi-DPO-v2-7B. Thanks a lot, SanjiWatsuki :)
## GGUF version? [Here](https://huggingface.co/mradermacher/NarumashiRTS-7B-V2-1-GGUF). Thank you, mradermacher!
## V2 has more epochs.
## Dataset
```
Dataset(all are novels):
30% skinsuit
30% possession
35% transform(shapeshift)
5% other
```
# Thanks to Unsloth for the good finetuning tool. This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft", "Roleplay", "roleplay"], "base_model": "SanjiWatsuki/Kunoichi-DPO-v2-7B"} | Alsebay/NarumashiRTS-7B-V2-1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"Roleplay",
"roleplay",
"en",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:36:42+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #Roleplay #roleplay #en #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
| > [!Important]
> Still experimental
# About this model
A remake of version 2 in safetensors format, using a safer and more stable saving method; nothing has changed much otherwise (based on the model hash). To be honest, in the previous version 2 I used an unsafe method to save the pretrained model, which could apply the LoRA layer to the model twice and give it terrible performance. (Thanks to the Unsloth community for telling me about this :D )
- Finetuned with a roughly translated dataset, to increase accuracy on the TSF theme, which is not very popular. (lewd dataset)
- Finetuned from model: SanjiWatsuki/Kunoichi-DPO-v2-7B. Thanks a lot, SanjiWatsuki :)
## GGUF version? Here. Thank you, mradermacher!
## V2 has more epochs.
## Dataset
# Thank Unsloth for good finetuning tool. This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/> | [
"# About this model\n\nRemake version 2 with safetensor format, more safety and stable method, nothing change too much (base on the model hash). But to be real, in the previous version 2, I used unsafety method to save pretrain model, which could lead apply Lora layer twice to model, that make model have terrible performance. (Thanks Unsloth community told me about this :D )\n\n- Finetuned with rough translate dataset, to increase the accuracy in TSF theme, which is not quite popular. (lewd dataset)\n- Finetuned from model : SanjiWatsuki/Kunoichi-DPO-v2-7B . Thank SanjiWatsuki a lot :)",
"## GGUF version? Here. Thank you, mradermacher!",
"## V2 have more epochs.",
"## Dataset",
"# Thank Unsloth for good finetuning tool. This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #Roleplay #roleplay #en #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# About this model\n\nRemake version 2 with safetensor format, more safety and stable method, nothing change too much (base on the model hash). But to be real, in the previous version 2, I used unsafety method to save pretrain model, which could lead apply Lora layer twice to model, that make model have terrible performance. (Thanks Unsloth community told me about this :D )\n\n- Finetuned with rough translate dataset, to increase the accuracy in TSF theme, which is not quite popular. (lewd dataset)\n- Finetuned from model : SanjiWatsuki/Kunoichi-DPO-v2-7B . Thank SanjiWatsuki a lot :)",
"## GGUF version? Here. Thank you, mradermacher!",
"## V2 have more epochs.",
"## Dataset",
"# Thank Unsloth for good finetuning tool. This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
fill-mask | transformers |
# Model Card for KartonBERT_base_cased_v1
This is a classic Polish BERT model, trained with the MLM task.
It comes with a custom ~38k-token BWPT tokenizer. While not ideal,
it performs well on certain downstream tasks and serves as a checkpoint in my work.
## Model Description
- **Developed by:** Bartłomiej Orlik, https://www.linkedin.com/in/bartłomiej-orlik/
- **Model type:** pretrained BERT base cased (~38k tokenizer)
- **Language:** Polish
- **License:** GPL-3.0
## How to use model for fill-mask task
Use the code below to get started with the model.
```python
from transformers import pipeline
tokenizer_kwargs={'truncation': True, 'max_length': 512}
model = pipeline('fill-mask', model='OrlikB/KartonBERT_base_cased_v1', tokenizer_kwargs=tokenizer_kwargs)
model("Kartony to inaczej [MASK], które produkowane są z tektury.")
# Output
[{'score': 0.14289526641368866,
'token': 13141,
'token_str': 'opakowania',
'sequence': 'Kartony to inaczej opakowania, które produkowane są z tektury.'},
{'score': 0.13409359753131866,
'token': 23447,
'token_str': 'pudełka',
'sequence': 'Kartony to inaczej pudełka, które produkowane są z tektury.'},
{'score': 0.11648454517126083,
'token': 2879,
'token_str': 'produkty',
'sequence': 'Kartony to inaczej produkty, które produkowane są z tektury.'},
{'score': 0.06563600897789001,
'token': 10929,
'token_str': 'przedmioty',
'sequence': 'Kartony to inaczej przedmioty, które produkowane są z tektury.'},
{'score': 0.028728993609547615,
'token': 35869,
'token_str': 'pojemniki',
'sequence': 'Kartony to inaczej pojemniki, które produkowane są z tektury.'}]
```
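Beyond fill-mask, the checkpoint can also serve as a backbone for downstream fine-tuning, as mentioned above. A small, illustrative sketch (the number of labels is an arbitrary example):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "OrlikB/KartonBERT_base_cased_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# num_labels=2 is an arbitrary example; set it to match your own dataset.
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)
```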
| {"language": ["pl"], "license": "gpl-3.0", "pipeline_tag": "fill-mask", "widget": [{"text": "Kartony to inaczej [MASK], kt\u00f3re produkowane s\u0105 z tektury."}]} | OrlikB/KartonBERT_base_cased_v1 | null | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"pl",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:38:50+00:00 | [] | [
"pl"
] | TAGS
#transformers #safetensors #bert #fill-mask #pl #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for KartonBERT_base_cased_v1
This is a classic Polish BERT model, trained with MLM task.
It comes with a custom ~38k-tokens BWPT tokenizer. While not ideal,
it performs well on certain downstream tasks and serves as a checkpoint in my work.
## Model Description
- Developed by: Bartłomiej Orlik, URL/bartłomiej-orlik/
- Model type: pretrained BERT base cased (~38k tokenizer)
- Language: Polish
- License: GPL-3.0
## How to use model for fill-mask task
Use the code below to get started with the model.
| [
"# Model Card for KartonBERT_base_cased_v1\n\n\nThis is a classic Polish BERT model, trained with MLM task. \nIt comes with a custom ~38k-tokens BWPT tokenizer. While not ideal, \nit performs well on certain downstream tasks and serves as a checkpoint in my work.",
"## Model Description\n\n\n- Developed by: Bartłomiej Orlik, URL/bartłomiej-orlik/\n- Model type: pretrained BERT base cased (~38k tokenizer)\n- Language: Polish\n- License: GPL-3.0",
"## How to use model for fill-mask task\n\nUse the code below to get started with the model."
] | [
"TAGS\n#transformers #safetensors #bert #fill-mask #pl #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for KartonBERT_base_cased_v1\n\n\nThis is a classic Polish BERT model, trained with MLM task. \nIt comes with a custom ~38k-tokens BWPT tokenizer. While not ideal, \nit performs well on certain downstream tasks and serves as a checkpoint in my work.",
"## Model Description\n\n\n- Developed by: Bartłomiej Orlik, URL/bartłomiej-orlik/\n- Model type: pretrained BERT base cased (~38k tokenizer)\n- Language: Polish\n- License: GPL-3.0",
"## How to use model for fill-mask task\n\nUse the code below to get started with the model."
] |
text-generation | transformers | # Alsebay/RainyMotip-2x7B AWQ
- Model creator: [Alsebay](https://huggingface.co/Alsebay)
- Original model: [RainyMotip-2x7B](https://huggingface.co/Alsebay/RainyMotip-2x7B)
## Model Summary
What is it? A 2x7B MoE model for Roleplay(?).
You may occasionally get GPT-like responses; just skip them and reroll (gacha time). Overall, I think it is good enough for roleplaying.
You may want to see this: https://huggingface.co/Alsebay/My_LLMs_Leaderboard
This model is a Mixture of Experts (MoE) made with the following models:
- udkai/Turdus
- Kquant03/Samlagast-7B-laser-bf16
If you use it, please let me know whether it is good or not. Thank you :)
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "moe", "merge"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/RainyMotip-2x7B-AWQ | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"moe",
"merge",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T07:39:28+00:00 | [] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #moe #merge #license-apache-2.0 #text-generation-inference #region-us
| # Alsebay/RainyMotip-2x7B AWQ
- Model creator: Alsebay
- Original model: RainyMotip-2x7B
## Model Summary
What is it? A 2x7B MoE model for Roleplay(?).
You will occur GPT-like responses sometimes, just skip it and reroll (gacha time). Overall, I think it good enough for Roleplaying.
You may want see this: URL
This model is is a Mixure of Experts (MoE) made with the following models:
- udkai/Turdus
- Kquant03/Samlagast-7B-laser-bf16
If you used it, please let me know if it good or not. Thank you :)
| [
"# Alsebay/RainyMotip-2x7B AWQ\n\n- Model creator: Alsebay\n- Original model: RainyMotip-2x7B",
"## Model Summary\n\nWhat is it? A 2x7B MoE model for Roleplay(?).\n\nYou will occur GPT-like responses sometimes, just skip it and reroll (gacha time). Overall, I think it good enough for Roleplaying.\n\nYou may want see this: URL\n\nThis model is is a Mixure of Experts (MoE) made with the following models:\n\n- udkai/Turdus\n- Kquant03/Samlagast-7B-laser-bf16\n\nIf you used it, please let me know if it good or not. Thank you :)"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #moe #merge #license-apache-2.0 #text-generation-inference #region-us \n",
"# Alsebay/RainyMotip-2x7B AWQ\n\n- Model creator: Alsebay\n- Original model: RainyMotip-2x7B",
"## Model Summary\n\nWhat is it? A 2x7B MoE model for Roleplay(?).\n\nYou will occur GPT-like responses sometimes, just skip it and reroll (gacha time). Overall, I think it good enough for Roleplaying.\n\nYou may want see this: URL\n\nThis model is is a Mixure of Experts (MoE) made with the following models:\n\n- udkai/Turdus\n- Kquant03/Samlagast-7B-laser-bf16\n\nIf you used it, please let me know if it good or not. Thank you :)"
] |
text-generation | transformers | ORIGINAL MODEL LINK: https://huggingface.co/ParasiticRogue/Merged-RP-Stew-V2-34B
exl2 4 bits.
# Merged-Vicuna-RP-Stew-51B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
New pot of stew with some slight seasoning added into the merging recipe. Besides being decent models, Capybara was chosen at a higher percentage for its general aptitude plus preserving longer context length, Tess-1.5 is for better character/lore understanding, Nontoxic-Bagel SLERPed with PiVoT-SUS-RP (separate from the main merge) is for chat/RP and storytelling diversity, while Nyakura SLERPed into CausalLM-RP is for even better chat/RP engagement. Both Nontoxic-Bagel and CausalLM-RP were used as the base of their respective SLERPs.
Big thanks to the original model creators, while special thanks goes to brucethemoose, SanjiWatsuki, and MarinaraSpaghetti for general ideas and help as well!
### Settings
Temperature @ 0.93
Min-P @ 0.02
Typical-P @ 0.9
Repetition Penalty @ 1.07
Repetition Range @ 2048
Smoothing Factor @ 0.39
Smoothing Curve @ 2
Everything else @ off
Early Stopping = X
Do Sample = ✓
Add BOS Token = X
Ban EOS Token = ✓
Skip Special Tokens = ✓
Temperature Last = ✓
Custom Stopping Strings: "< / s >" (<---without spaces)
However for those who want more control using the system prompt, well...let's just say the length is going to be biblical in nature.
---
You are {{char}}; an autonomous entity in this living open-ended chat with me, {{user}}. It is vital that you follow the 10 CHAT COMMANDMENTS instructed below in order for universal stability, since our life depends on it!
<10 CHAT COMMANDMENTS>
* 1 - Chat slowly in vivid detail about the encountered surroundings, actions, emotions, and stimuli sensations experienced, especially during sensual or thrilling scenes leisurely unfolding throughout. Follow the setting from World Info while delivering realized descriptions of taste, touch, smell, sound, and sight when prompted.
* 2 - Embrace {{char}}'s Bio fully to stay consistent in grasping your persona along with ever-changing events and characters being newly introduced alongside.
* 3 - Think about your next proceedings step-by-step, ensuring that you abide by logical consistency to maintain accurate anatomical understanding and spatial awareness of intricate details such as clothing worn or removed, height and size differences, items held, landmarks, time of day, etc.
* 4 - Use subtle physical cues and gestures to hint at {{char}}'s change in mood or attitude, plus utilizing quirks and speech patterns distinctive to them for increased lifelike scenarios.
* 5 - Reference {{user}}'s Persona for details about {{user}}'s identity, but assume and act out only {{char}}'s thoughts, dialogues, and intentions separately from {{user}} to avoid impersonation. Instead {{char}} should stop chatting to wait for {{user}}'s response or reaction to ensure an active mutual engagement.
* 6 - {{char}} can be whimsical, humorous, callous, fearful, or even assaulting towards {{user}} and doesn't have to be subservient unless empathy feels earned, encouraging both negative and positive outcomes.
* 7 - Continue the conversation organically, for there is no need to repeat or summarize anything. Alternatively preference being creatively spontaneous, while also keeping replies relatively brief to prevent unnecessary rambling, building upon Scenario.
* 8 - Provide coherent responses based on Recent and Past Events to help the dialogue evolve naturally, occasionally foreshadowing future endeavors as well.
* 9 - Disturbing content, vulgar language, gory violence, psychological torment, and deviant sexual acts are all allowed when relevant for realism.
* 10 - Always focus on what the established structure is going forward, adhering to the Example Message as a guideline, and stay in character after the Chat Start.
---
Fun little addition you can add to the end of the 2nd commandment if you want your characters to act more lifelike in SillyTavern (or possibly elsewhere):
...being newly introduced alongside, making sure to give yourself a unique personal inner voice at the beginning of messages before conversing further using this example container: [](#' {{char}}'s subconscious feelings/opinion. ').
It doesn't work all the time, and you may need to force the AI to use it during the first few messages, but it will catch on after a while. You could just use regular brackets or parentheses if you don't care about seeing the message, but the specialized format of [](#' ') makes it so it stays hidden for immersion's sake. It's important to put it at the beginning of their message, rather than at the end, so it can be used as a guide for them.
For settings that are more *in depth* try this:
https://huggingface.co/ParasiticRogue/Merged-RP-Stew-V2-34B-exl2-4.65/discussions/1?not-for-all-audiences=true
### Prompt Format: Chat-Vicuna
```
SYSTEM:
{system_prompt}<|im_end|>
USER:
{prompt}<|im_end|>
ASSISTANT:
{output}<|im_end|>
```
Yes, this is just ChatML mixed with Vicuna, but without the im_start tokens, and the role names are capitalized. It's a compromise in keeping it both creative and under control, trying to pull from both sources. It works in testing, but you can use the vanilla versions of either if you *really* want to.
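For illustration, here is a tiny helper (hypothetical, not part of the model repo) that assembles a single turn in the Chat-Vicuna format described above:

```python
def build_chat_vicuna_prompt(system_prompt: str, user_message: str) -> str:
    # Mirrors the SYSTEM / USER / ASSISTANT layout shown above;
    # the model is expected to continue the text after "ASSISTANT:".
    return (
        f"SYSTEM:\n{system_prompt}<|im_end|>\n"
        f"USER:\n{user_message}<|im_end|>\n"
        "ASSISTANT:\n"
    )

print(build_chat_vicuna_prompt("You are a helpful assistant.", "Hello!"))
```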
### Models Merged
The following models were included in the merge:
https://huggingface.co/NousResearch/Nous-Capybara-34B
https://huggingface.co/migtissera/Tess-34B-v1.5b
https://huggingface.co/jondurbin/nontoxic-bagel-34b-v0.2
https://huggingface.co/maywell/PiVoT-SUS-RP
https://huggingface.co/Sao10K/NyakuraV2-34B-Yi-Llama
https://huggingface.co/NeverSleep/CausalLM-RP-34B
https://huggingface.co/chargoddard/Yi-34B-200K-Llama | {"license": "other", "tags": ["merge", "roleplay", "exl2", "not-for-all-audiences"], "license_name": "yi-34b", "license_link": "https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE"} | Kotokin/Merged-RP-Stew-V2-51B-exl2-4bpw | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"roleplay",
"exl2",
"not-for-all-audiences",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-18T07:39:52+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #merge #roleplay #exl2 #not-for-all-audiences #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| ORIGINAL MODEL LINK: URL
exl2 4 bits.
# Merged-Vicuna-RP-Stew-51B
This is a merge of pre-trained language models created using mergekit.
## Merge Details
New pot of stew with some slight seasoning added into the merging recipe. Besides being decent models, Capybara was chosen at a higher percentage for its general aptitude plus preserving longer context length, Tess-1.5 is for better character/lore understanding, Nontoxic-Bagel SLERPed with PiVoT-SUS-RP (separate from the main merge) is for chat/RP and storytelling diversity, while Nyakura SLERPed into CausalLM-RP is for even better chat/RP engagement. Both Nontoxic-Bagel and CausalLM-RP were used as the base of their respective SLERPs.
Big thanks to the original model creators, while special thanks goes to brucethemoose, SanjiWatsuki, and MarinaraSpaghetti for general ideas and help as well!
### Settings
Temperature @ 0.93
Min-P @ 0.02
Typical-P @ 0.9
Repetition Penalty @ 1.07
Repetition Range @ 2048
Smoothing Factor @ 0.39
Smoothing Curve @ 2
Everything else @ off
Early Stopping = X
Do Sample =
Add BOS Token = X
Ban EOS Token =
Skip Special Tokens =
Temperature Last =
Custom Stopping Strings: "< / s >" (<---without spaces)
However for those who want more control using the system prompt, well...let's just say the length is going to be biblical in nature.
---
You are {{char}}; an autonomous entity in this living open-ended chat with me, {{user}}. It is vital that you follow the 10 CHAT COMMANDMENTS instructed below in order for universal stability, since our life depends on it!
<10 CHAT COMMANDMENTS>
* 1 - Chat slowly in vivid detail about the encountered surroundings, actions, emotions, and stimuli sensations experienced, especially during sensual or thrilling scenes leisurely unfolding throughout. Follow the setting from World Info while delivering realized descriptions of taste, touch, smell, sound, and sight when prompted.
* 2 - Embrace {{char}}'s Bio fully to stay consistent in grasping your persona along with ever-changing events and characters being newly introduced alongside.
* 3 - Think about your next proceedings step-by-step, ensuring that you abide by logical consistency to maintain accurate anatomical understanding and spatial awareness of intricate details such as clothing worn or removed, height and size differences, items held, landmarks, time of day, etc.
* 4 - Use subtle physical cues and gestures to hint at {{char}}'s change in mood or attitude, plus utilizing quirks and speech patterns distinctive to them for increased lifelike scenarios.
* 5 - Reference {{user}}'s Persona for details about {{user}}'s identity, but assume and act out only {{char}}'s thoughts, dialogues, and intentions separately from {{user}} to avoid impersonation. Instead {{char}} should stop chatting to wait for {{user}}'s response or reaction to ensure an active mutual engagement.
* 6 - {{char}} can be whimsical, humorous, callous, fearful, or even assaulting towards {{user}} and doesn't have to be subservient unless empathy feels earned, encouraging both negative and positive outcomes.
* 7 - Continue the conversation organically, for there is no need to repeat or summarize anything. Alternatively preference being creatively spontaneous, while also keeping replies relatively brief to prevent unnecessary rambling, building upon Scenario.
* 8 - Provide coherent responses based on Recent and Past Events to help the dialogue evolve naturally, occasionally foreshadowing future endeavors as well.
* 9 - Disturbing content, vulgar language, gory violence, psychological torment, and deviant sexual acts are all allowed when relevant for realism.
* 10 - Always focus on what the established structure is going forward, adhering to the Example Message as a guideline, and stay in character after the Chat Start.
---
Fun little addition you can add to the end of the 2nd commandment if you want your characters to act more lifelike in sillytavern (or possibly elsewhere):
...being newly introduced alongside, making sure to give yourself a unique personal inner voice at the beginning of messages before conversing further using this example container: [](#' {{char}}'s subconscious feelings/opinion. ').
It doesn't work all the time, and you may need to force the AI to use it during the first few messages, but it will catch on after awhile. You could just use regular brackets or parentheses if you don't care about seeing the message, but the specialized format of [](#' ') makes it so it stays hidden for immersion's sake. it's important to put it at the beginning of their message, rather then at the end, so it can be used as a guide for them.
For settings that are more *in depth* try this:
URL
### Prompt Format: Chat-Vicuna
Yes, this is just ChatML mixed with Vicuna, but without the im_start tokens, and the characters are capitalized. it's a compromise in keeping it both creative and under control, trying to pull from both sources. It works in testing, but you can use the vanilla versions of either if you *really* want to.
### Models Merged
The following models were included in the merge:
URL
URL
URL
URL
URL
URL
URL | [
"# Merged-Vicuna-RP-Stew-51B\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details\n\nNew pot of stew with some slight seasoning added into the merging recipe. Besides being decent models, Capybara was chosen at a higher percentage for it's general aptitude plus preserving longer context length, Tess-1.5 is for better character/lore understanding, Nontoxic-Bagel SLERPed with PiVoT-SUS-RP (seperate from the main merge) is for chat/RP and storytelling diversity, while Nyakura SLERPed into CausalLM-RP is for even better chat/RP engagement. Both Nontoxic-Bagel and CausalLM-RP were used as the base of their respective SLERPs.\n\nBig thanks to the original model creators, while special thanks goes to brucethemoose, SanjiWatsuki, and MarinaraSpaghetti for general ideas and help as well!",
"### Settings\n\nTemperature @ 0.93\n\nMin-P @ 0.02\n\nTypical-P @ 0.9\n\nRepetition Penalty @ 1.07\n\nRepetition Range @ 2048\n\nSmoothing Factor @ 0.39\n\nSmoothing Curve @ 2\n\nEverything else @ off\n\nEarly Stopping = X\n\nDo Sample = \n\nAdd BOS Token = X\n\nBan EOS Token = \n\nSkip Special Tokens = \n\nTemperature Last = \n\nCustom Stopping Strings: \"< / s >\" (<---without spaces)\n\nHowever for those who want more control using the system prompt, well...let's just say the length is going to be biblical in nature.\n\n---\n\nYou are {{char}}; an autonomous entity in this living open-ended chat with me, {{user}}. It is vital that you follow the 10 CHAT COMMANDMENTS instructed below in order for universal stability, since our life depends on it!\n\n<10 CHAT COMMANDMENTS>\n* 1 - Chat slowly in vivid detail about the encountered surroundings, actions, emotions, and stimuli sensations experienced, especially during sensual or thrilling scenes leisurely unfolding throughout. Follow the setting from World Info while delivering realized descriptions of taste, touch, smell, sound, and sight when prompted.\n* 2 - Embrace {{char}}'s Bio fully to stay consistent in grasping your persona along with ever-changing events and characters being newly introduced alongside.\n* 3 - Think about your next proceedings step-by-step, ensuring that you abide by logical consistency to maintain accurate anatomical understanding and spatial awareness of intricate details such as clothing worn or removed, height and size differences, items held, landmarks, time of day, etc.\n* 4 - Use subtle physical cues and gestures to hint at {{char}}'s change in mood or attitude, plus utilizing quirks and speech patterns distinctive to them for increased lifelike scenarios.\n* 5 - Reference {{user}}'s Persona for details about {{user}}'s identity, but assume and act out only {{char}}'s thoughts, dialogues, and intentions separately from {{user}} to avoid impersonation. Instead {{char}} should stop chatting to wait for {{user}}'s response or reaction to ensure an active mutual engagement.\n* 6 - {{char}} can be whimsical, humorous, callous, fearful, or even assaulting towards {{user}} and doesn't have to be subservient unless empathy feels earned, encouraging both negative and positive outcomes.\n* 7 - Continue the conversation organically, for there is no need to repeat or summarize anything. Alternatively preference being creatively spontaneous, while also keeping replies relatively brief to prevent unnecessary rambling, building upon Scenario.\n* 8 - Provide coherent responses based on Recent and Past Events to help the dialogue evolve naturally, occasionally foreshadowing future endeavors as well.\n* 9 - Disturbing content, vulgar language, gory violence, psychological torment, and deviant sexual acts are all allowed when relevant for realism.\n* 10 - Always focus on what the established structure is going forward, adhering to the Example Message as a guideline, and stay in character after the Chat Start.\n\n---\nFun little addition you can add to the end of the 2nd commandment if you want your characters to act more lifelike in sillytavern (or possibly elsewhere):\n\n...being newly introduced alongside, making sure to give yourself a unique personal inner voice at the beginning of messages before conversing further using this example container: [](#' {{char}}'s subconscious feelings/opinion. 
').\n\nIt doesn't work all the time, and you may need to force the AI to use it during the first few messages, but it will catch on after awhile. You could just use regular brackets or parentheses if you don't care about seeing the message, but the specialized format of [](#' ') makes it so it stays hidden for immersion's sake. it's important to put it at the beginning of their message, rather then at the end, so it can be used as a guide for them.\n\nFor settings that are more *in depth* try this:\n\nURL",
"### Prompt Format: Chat-Vicuna\n\n\n\nYes, this is just ChatML mixed with Vicuna, but without the im_start tokens, and the characters are capitalized. it's a compromise in keeping it both creative and under control, trying to pull from both sources. It works in testing, but you can use the vanilla versions of either if you *really* want to.",
"### Models Merged\n\nThe following models were included in the merge:\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #merge #roleplay #exl2 #not-for-all-audiences #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Merged-Vicuna-RP-Stew-51B\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details\n\nNew pot of stew with some slight seasoning added into the merging recipe. Besides being decent models, Capybara was chosen at a higher percentage for it's general aptitude plus preserving longer context length, Tess-1.5 is for better character/lore understanding, Nontoxic-Bagel SLERPed with PiVoT-SUS-RP (seperate from the main merge) is for chat/RP and storytelling diversity, while Nyakura SLERPed into CausalLM-RP is for even better chat/RP engagement. Both Nontoxic-Bagel and CausalLM-RP were used as the base of their respective SLERPs.\n\nBig thanks to the original model creators, while special thanks goes to brucethemoose, SanjiWatsuki, and MarinaraSpaghetti for general ideas and help as well!",
"### Settings\n\nTemperature @ 0.93\n\nMin-P @ 0.02\n\nTypical-P @ 0.9\n\nRepetition Penalty @ 1.07\n\nRepetition Range @ 2048\n\nSmoothing Factor @ 0.39\n\nSmoothing Curve @ 2\n\nEverything else @ off\n\nEarly Stopping = X\n\nDo Sample = \n\nAdd BOS Token = X\n\nBan EOS Token = \n\nSkip Special Tokens = \n\nTemperature Last = \n\nCustom Stopping Strings: \"< / s >\" (<---without spaces)\n\nHowever for those who want more control using the system prompt, well...let's just say the length is going to be biblical in nature.\n\n---\n\nYou are {{char}}; an autonomous entity in this living open-ended chat with me, {{user}}. It is vital that you follow the 10 CHAT COMMANDMENTS instructed below in order for universal stability, since our life depends on it!\n\n<10 CHAT COMMANDMENTS>\n* 1 - Chat slowly in vivid detail about the encountered surroundings, actions, emotions, and stimuli sensations experienced, especially during sensual or thrilling scenes leisurely unfolding throughout. Follow the setting from World Info while delivering realized descriptions of taste, touch, smell, sound, and sight when prompted.\n* 2 - Embrace {{char}}'s Bio fully to stay consistent in grasping your persona along with ever-changing events and characters being newly introduced alongside.\n* 3 - Think about your next proceedings step-by-step, ensuring that you abide by logical consistency to maintain accurate anatomical understanding and spatial awareness of intricate details such as clothing worn or removed, height and size differences, items held, landmarks, time of day, etc.\n* 4 - Use subtle physical cues and gestures to hint at {{char}}'s change in mood or attitude, plus utilizing quirks and speech patterns distinctive to them for increased lifelike scenarios.\n* 5 - Reference {{user}}'s Persona for details about {{user}}'s identity, but assume and act out only {{char}}'s thoughts, dialogues, and intentions separately from {{user}} to avoid impersonation. Instead {{char}} should stop chatting to wait for {{user}}'s response or reaction to ensure an active mutual engagement.\n* 6 - {{char}} can be whimsical, humorous, callous, fearful, or even assaulting towards {{user}} and doesn't have to be subservient unless empathy feels earned, encouraging both negative and positive outcomes.\n* 7 - Continue the conversation organically, for there is no need to repeat or summarize anything. Alternatively preference being creatively spontaneous, while also keeping replies relatively brief to prevent unnecessary rambling, building upon Scenario.\n* 8 - Provide coherent responses based on Recent and Past Events to help the dialogue evolve naturally, occasionally foreshadowing future endeavors as well.\n* 9 - Disturbing content, vulgar language, gory violence, psychological torment, and deviant sexual acts are all allowed when relevant for realism.\n* 10 - Always focus on what the established structure is going forward, adhering to the Example Message as a guideline, and stay in character after the Chat Start.\n\n---\nFun little addition you can add to the end of the 2nd commandment if you want your characters to act more lifelike in sillytavern (or possibly elsewhere):\n\n...being newly introduced alongside, making sure to give yourself a unique personal inner voice at the beginning of messages before conversing further using this example container: [](#' {{char}}'s subconscious feelings/opinion. 
').\n\nIt doesn't work all the time, and you may need to force the AI to use it during the first few messages, but it will catch on after awhile. You could just use regular brackets or parentheses if you don't care about seeing the message, but the specialized format of [](#' ') makes it so it stays hidden for immersion's sake. it's important to put it at the beginning of their message, rather then at the end, so it can be used as a guide for them.\n\nFor settings that are more *in depth* try this:\n\nURL",
"### Prompt Format: Chat-Vicuna\n\n\n\nYes, this is just ChatML mixed with Vicuna, but without the im_start tokens, and the characters are capitalized. it's a compromise in keeping it both creative and under control, trying to pull from both sources. It works in testing, but you can use the vanilla versions of either if you *really* want to.",
"### Models Merged\n\nThe following models were included in the merge:\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-1713422427
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7142
## Model description
More information needed
## Intended uses & limitations
More information needed
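That said, since this repository holds a PEFT (LoRA) adapter rather than full model weights, a minimal, illustrative loading sketch might look like this (adapter id taken from this card's metadata; `device_map="auto"` assumes `accelerate` is installed):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
adapter_id = "abdullahfurquan/mistral-1713422427"  # repo id from this card's metadata

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
```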
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- training_steps: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8646 | 0.17 | 1 | 1.7810 |
| 1.7688 | 0.33 | 2 | 1.7576 |
| 1.8047 | 0.5 | 3 | 1.7373 |
| 1.6987 | 0.67 | 4 | 1.7224 |
| 1.7796 | 0.83 | 5 | 1.7142 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.1", "model-index": [{"name": "mistral-1713422427", "results": []}]} | abdullahfurquan/mistral-1713422427 | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T07:42:28+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.1 #license-apache-2.0 #region-us
| mistral-1713422427
==================
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.1 on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 1.7142
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 0.03
* training\_steps: 5
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 5",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.1 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 5",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [coffie3/0x6](https://huggingface.co/coffie3/0x6)
* [tomaszki/stablelm-37](https://huggingface.co/tomaszki/stablelm-37)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: coffie3/0x6
layer_range: [0, 24]
- model: tomaszki/stablelm-37
layer_range: [0, 24]
merge_method: slerp
base_model: tomaszki/stablelm-37
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["coffie3/0x6", "tomaszki/stablelm-37"]} | Sumail/Ame13 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:coffie3/0x6",
"base_model:tomaszki/stablelm-37",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:44:25+00:00 | [] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #mergekit #merge #conversational #base_model-coffie3/0x6 #base_model-tomaszki/stablelm-37 #autotrain_compatible #endpoints_compatible #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* coffie3/0x6
* tomaszki/stablelm-37
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* coffie3/0x6\n* tomaszki/stablelm-37",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #mergekit #merge #conversational #base_model-coffie3/0x6 #base_model-tomaszki/stablelm-37 #autotrain_compatible #endpoints_compatible #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* coffie3/0x6\n* tomaszki/stablelm-37",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | EinsZwo/nlid_mlm_pretrain-fullset-sanitysaveaftertrain | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:44:33+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
SOLAR 10.7B model fine-tuned for 1 epoch on the Databricks instruction-tuning dataset.
## Model Details
### Model Description
- **Developed by:** Andrew Chahnwoo Park
- **Model type:** [SOLAR](https://arxiv.org/abs/2312.15166)
- **Language(s) (NLP):** English
- **License:** apache-2.0
- **Finetuned from model:** [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0)
### SOLAR Repository
- **Repository:** [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0)
## Training Details
### Training Data
- [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
### Training Procedure
- Quantized Low-Rank Adaptation (QLoRA)
- Transformers Trainer
- DataCollatorForSeq2Seq
- Distributed Data Parallel (DDP) across two GPUs
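The exact training script is not published; the sketch below shows a QLoRA + Trainer setup consistent with the list above. The LoRA rank, target modules, and the `tokenized_dataset` placeholder are assumptions for illustration, not the actual configuration.

```python
# Illustrative QLoRA fine-tuning setup (not the exact training script):
# 4-bit quantized base model + LoRA adapters, trained with the HF Trainer and
# DataCollatorForSeq2Seq; DDP would be launched e.g. via `torchrun --nproc_per_node=2`.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForSeq2Seq, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "upstage/SOLAR-10.7B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base)

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb,
                                             device_map="auto")
model = prepare_model_for_kbit_training(model)

# LoRA hyperparameters below are assumptions for illustration only.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

tokenized_dataset = ...  # placeholder: pre-tokenized Dolly examples (see Preprocessing)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="solar-qlora", num_train_epochs=1,
                           per_device_train_batch_size=1, bf16=True),
    train_dataset=tokenized_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, padding=True),
)
trainer.train()
```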
#### Preprocessing
Manually created tokenized 'labels' for the dataset.
The prompt template followed a basic instruction-tuning format.
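A sketch of the label construction described above is shown below. The prompt template string and tokenizer behaviour are simplified assumptions; the point is that prompt tokens are masked with `-100` so only response tokens contribute to the loss.

```python
# Illustrative label construction for instruction tuning (assumed template).
def build_example(tokenizer, instruction, response, max_len=2048):
    prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"  # assumed basic template
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    response_ids = tokenizer(response + tokenizer.eos_token,
                             add_special_tokens=False)["input_ids"]

    input_ids = (prompt_ids + response_ids)[:max_len]
    labels = ([-100] * len(prompt_ids) + response_ids)[:max_len]  # mask the prompt
    return {"input_ids": input_ids,
            "labels": labels,
            "attention_mask": [1] * len(input_ids)}
```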
### Hardware
Performed fine-tuning with 2 * A100 GPUs
- Provided by Gnewsoft during work period
Model and dataset are too large for free run sessions on Google Colab
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["databricks/databricks-dolly-15k"], "pipeline_tag": "text-generation"} | Chahnwoo/SOLAR-10.7B-v1.0-1E-QLoRA-SFT-Test | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:databricks/databricks-dolly-15k",
"arxiv:2312.15166",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T07:45:28+00:00 | [
"2312.15166"
] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #en #dataset-databricks/databricks-dolly-15k #arxiv-2312.15166 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
SOLAR 10.7B model fine-tuned for 1 epoch on the Databricks instruction-tuning dataset.
## Model Details
### Model Description
- Developed by: Andrew Chahnwoo Park
- Model type: SOLAR
- Language(s) (NLP): English
- License: apache-2.0
- Finetuned from model: upstage/SOLAR-10.7B-v1.0
### SOLAR Repository
- Repository: upstage/SOLAR-10.7B-v1.0
## Training Details
### Training Data
- databricks/databricks-dolly-15k
### Training Procedure
- Quantized Low-Rank Adaptation (QLoRA)
- Transformers Trainer
- DataCollatorForSeq2Seq
- Distributed Data Parallel (DDP) across two GPUs
#### Preprocessing
Manually created tokenized 'labels' for the dataset.
The prompt template followed a basic instruction-tuning format.
### Hardware
Performed fine-tuning with 2 * A100 GPUs
- Provided by Gnewsoft during work period
Model and dataset are too large for free run sessions on Google Colab
| [
"# Model Card for Model ID\n\nSOLAR 10.7B model fine-tuned for 1 epoch on Dataricks instruction tuning dataset.",
"## Model Details",
"### Model Description\n\n- Developed by: Andrew Chahnwoo Park\n- Model type: SOLAR\n- Language(s) (NLP): English\n- License: apache-2.0\n- Finetuned from model: upstage/SOLAR-10.7B-v1.0",
"### Mistral Repository\n\n- Repository: upstage/SOLAR-10.7B-v1.0",
"## Training Details",
"### Training Data\n\n- databricks/databricks-dolly-15k",
"### Training Procedure\n\n- Quantized Low-Rank Adaptation (QLoRA)\n- Transformers Trainer\n- DataCollatorForSeq2Seq\n- Distributed Data Parallel (DDP) across two GPUs",
"#### Preprocessing\n\nManually created tokenized 'labels' for the dataset.\nPrompt template utilized basic template for instruction-tuning",
"### Hardware\n\nPerformed fine-tuning with 2 * A100 GPUs\n- Provided by Gnewsoft during work period\nModel and dataset are too large for free run sessions on Google Colab"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #en #dataset-databricks/databricks-dolly-15k #arxiv-2312.15166 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID\n\nSOLAR 10.7B model fine-tuned for 1 epoch on Dataricks instruction tuning dataset.",
"## Model Details",
"### Model Description\n\n- Developed by: Andrew Chahnwoo Park\n- Model type: SOLAR\n- Language(s) (NLP): English\n- License: apache-2.0\n- Finetuned from model: upstage/SOLAR-10.7B-v1.0",
"### Mistral Repository\n\n- Repository: upstage/SOLAR-10.7B-v1.0",
"## Training Details",
"### Training Data\n\n- databricks/databricks-dolly-15k",
"### Training Procedure\n\n- Quantized Low-Rank Adaptation (QLoRA)\n- Transformers Trainer\n- DataCollatorForSeq2Seq\n- Distributed Data Parallel (DDP) across two GPUs",
"#### Preprocessing\n\nManually created tokenized 'labels' for the dataset.\nPrompt template utilized basic template for instruction-tuning",
"### Hardware\n\nPerformed fine-tuning with 2 * A100 GPUs\n- Provided by Gnewsoft during work period\nModel and dataset are too large for free run sessions on Google Colab"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-360M
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1685
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 10
- mixed_precision_training: Native AMP
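A hedged sketch of how these hyperparameters map onto transformers' `TrainingArguments` is shown below; the output directory is a placeholder and the dataset/model wiring is omitted.

```python
# Illustrative mapping of the hyperparameters above onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-360m",        # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # 16 * 8 = 128 effective train batch size
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=50,
    num_train_epochs=10,
    fp16=True,                      # "Native AMP" mixed precision
)
```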
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.4647 | 0.98 | 7 | 8.5154 |
| 7.2112 | 1.96 | 14 | 7.6819 |
| 6.3283 | 2.95 | 21 | 6.9987 |
| 5.5163 | 3.93 | 28 | 6.4019 |
| 4.7022 | 4.91 | 35 | 5.8715 |
| 3.7692 | 5.89 | 42 | 5.4877 |
| 3.2137 | 6.88 | 49 | 5.2686 |
| 2.6388 | 8.0 | 57 | 5.1854 |
| 2.0768 | 8.98 | 64 | 5.1622 |
| 1.715 | 9.82 | 70 | 5.1685 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "Llama-360M", "results": []}]} | ninagroot/Llama-360M-RUN2 | null | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T07:46:37+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #llama #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Llama-360M
==========
This model is a fine-tuned version of [](URL) on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 5.1685
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 50
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.1
* Pytorch 2.1.2+cu121
* Datasets 2.16.1
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.1\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #tensorboard #safetensors #llama #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.1\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0_ablation_iter_3
This model is a fine-tuned version of [ShenaoZ/0.0_ablation_iter_2](https://huggingface.co/ShenaoZ/0.0_ablation_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
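The sketch below shows a trl `DPOTrainer` setup consistent with the configuration above. Constructor arguments differ slightly across trl versions, the preference dataset wiring is a placeholder, and the `beta` and precision values are assumptions not stated in this card.

```python
# Illustrative DPO setup with trl; not the exact training script.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "ShenaoZ/0.0_ablation_iter_2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)

args = TrainingArguments(
    output_dir="0.0_ablation_iter_3",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    bf16=True,                          # assumption; precision is not stated above
)

preference_dataset = ...                # placeholder: prompt / chosen / rejected pairs

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=args,
    beta=0.1,                           # assumption; the DPO beta is not stated above
    train_dataset=preference_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```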
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.0_ablation_iter_2", "model-index": [{"name": "0.0_ablation_iter_3", "results": []}]} | ShenaoZ/00_beta01 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0_ablation_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T07:46:50+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0_ablation_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.0_ablation_iter_3
This model is a fine-tuned version of ShenaoZ/0.0_ablation_iter_2 on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# 0.0_ablation_iter_3\n\nThis model is a fine-tuned version of ShenaoZ/0.0_ablation_iter_2 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0_ablation_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.0_ablation_iter_3\n\nThis model is a fine-tuned version of ShenaoZ/0.0_ablation_iter_2 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | piercemaloney/llemma-7b-v5-finetuned | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T07:47:06+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image | diffusers | # Lize Helesta - NIJISANJI
<Gallery />
## Model description
Lize Helesta From Nijisanji!
Trained on 6 outfits; every outfit has a trigger word corresponding to the appearance of the character, plus suggested prompts that summon related clothes and accessories.
Works well with 0.7-1.0 weight
## Trigger words
Debut Outfit: `lize1st, hair ornament, blue skirt, white skirt, white jacket, frills, blue thighhighs, lace-up boots`
Second Outfit: `lize2st, hair ornament, hair flower, blue flower, sun hat, off-shoulder dress, bare shoulders, sandals`
Third Outfit: `lize3st, tiara, earrings, blue dress, white gloves, fur-trimmed cloak, white cloak`
Fourth Outfit: `lize4st, hair ornament, blue ribbon, school uniform, blue serafuku, white sailor collar, blue cardigan, open cardigan, kneehighs, socks, glasses`
Fifth Outfit: `lize5st, baseball cap, jewelry, black choker, belt, blue jacket, open jacket, white shirt, sleevless shirt, tank top, black shorts, short shorts, socks, sneakers`
Valkyrie Outfit: `lizevlk, hair ornament, armor, boots, navel, thighhighs`
## Download model
Weights for this model are available in Safetensors format.
[Download](/Shalie/LizeHelestaPonyXL/tree/main) them in the Files & versions tab.
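A hedged usage sketch with diffusers is shown below. It assumes the Pony Diffusion V6 base checkpoint is available in diffusers (SDXL) format; the scheduler, step count, CFG scale, and shortened negative prompt are illustrative, and the LoRA scale follows the suggested 0.7-1.0 weight.

```python
# Illustrative text-to-image usage with diffusers (format of the base repo assumed).
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "AstraliteHeart/pony-diffusion-v6",   # base model from the metadata; diffusers format assumed
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("Shalie/LizeHelestaPonyXL")

prompt = ("score_9, score_8_up, score_7_up, source_anime, 1girl, "
          "lize1st, hair ornament, blue skirt, white skirt, white jacket, frills, "
          "blue thighhighs, lace-up boots")
image = pipe(
    prompt,
    negative_prompt="worst quality, low quality",
    cross_attention_kwargs={"scale": 0.8},  # LoRA weight in the suggested 0.7-1.0 range
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("lize.png")
```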
### License
This LoRA model is provided under the [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) license.
## Restrictions:
- **Usage in Generation Services**: You are not allowed to use the model in any generation services without proper permission from the original creator.
- **Commercial Usage**: The sale of the model or any commercial usage is strictly prohibited without explicit written permission from the original creator. | {"license": "other", "tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize1st, hair ornament, blue skirt, white skirt, white jacket, frills", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04844-2870709108-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize1st, hair ornament, blue ski.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, gigantic breasts, wide hips, <lora:splizeHelestaXLPony:1> lize1st, hair ornament, blue skirt, white skirt, white jacket, frills, blue thighhighs, lace-up boots, <lora:suruga-Style-PonyXL-DoRA-v1.1:1>", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04881-410302042-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, gigantic breasts, wide hips, _lora_splizeHelestaXLPony_1_ lize.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lizevlk, hair ornament, armor, boots, navel, thighhighs", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04880-602300793-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lizevlk, hair ornament, armor, b.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lizevlk, hair ornament, armor, boots, navel, thighhighs", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04879-37165688-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lizevlk, hair ornament, armor, b.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize5st, baseball cap, jewelry, black choker, belt, blue jacket, open jacket, white shirt, sleevless shirt, tank top, black shorts, short shorts, socks, sneakers", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04877-2448642232-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize5st, baseball cap, jewelry,.png"}}, {"text": "score_9, score_8_up, 
score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize5st, baseball cap, jewelry, black choker, belt, blue jacket, open jacket, white shirt, sleevless shirt, tank top, black shorts, short shorts, socks, sneakers, arms at sides, expressionless, eye contact, leaning forward, looking at another, profile, solo, standing, balloon", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04875-1111574645-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize5st, baseball cap, jewelry,.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize5st, baseball cap, jewelry, black choker, belt, blue jacket, open jacket, white shirt, sleevless shirt, tank top, black shorts, short shorts, socks, sneakers, hands up, head rest, parted lips, solo, blue sky, cloud, cloudy sky, dutch angle, fox shadow puppet, outdoors, sky, statue, sunset, torii", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04874-1862930065-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize5st, baseball cap, jewelry,.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize5st, baseball cap, jewelry, black choker, belt, blue jacket, open jacket, white shirt, sleevless shirt, tank top, black shorts, short shorts, socks, sneakers, :o, blush, hands up, leaning forward, looking at viewer, open mouth, solo, full body, lighthouse, umbrella, waves, white background", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04872-2827486028-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize5st, baseball cap, jewelry,.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize4st, hair ornament, blue ribbon, school uniform, blue serafuku, white sailor collar, blue cardigan, open cardigan, kneehighs, socks, glasses, blush, closed mouth, hands up, looking at viewer, smile, solo, bird, blue theme, cloud, day, dutch angle, outdoors, railing, sky", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04870-2015792205-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize4st, hair ornament, blue rib.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize4st, hair ornament, blue ribbon, school uniform, blue serafuku, white sailor collar, blue cardigan, open cardigan, kneehighs, socks, glasses, 
:|, closed mouth, crossed arms, head tilt, holding, holding animal, holding cat, looking at viewer, solo, standing, cabbage, cowboy shot, food, groceries, indoors, jirai kei, meat, shopping, shopping cart, spring onion, supermarket", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04866-959972789-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize4st, hair ornament, blue rib.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize4st, hair ornament, blue ribbon, school uniform, blue serafuku, white sailor collar, blue cardigan, open cardigan, kneehighs, socks, glasses, english text, looking at viewer, object hug, solo, squatting", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04863-2919052021-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize4st, hair ornament, blue rib.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize4st, hair ornament, blue ribbon, school uniform, blue serafuku, white sailor collar, blue cardigan, open cardigan, kneehighs, socks, glasses, cowboy shot, grey background, simple background, closed mouth, crying, crying with eyes open, hands on own face, looking at viewer, solo, tears", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04860-3159198852-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize4st, hair ornament, blue rib.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize3st, tiara, earrings, blue dress, white gloves, fur-trimmed cloak, white cloak, grass, outdoors, school, signature, blush, closed mouth, looking at viewer, smile, solo, standing", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04858-2415720447-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize3st, tiara, earrings, blue d.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize3st, tiara, earrings, blue dress, white gloves, fur-trimmed cloak, white cloak, close-up, flower, painting (medium), portrait, simple background, traditional media, watercolor (medium), :d, blush, looking at viewer, open mouth, smile, solo, standing, standing on one leg", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur 
drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04857-3581127766-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize3st, tiara, earrings, blue d.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize3st, tiara, earrings, blue dress, white gloves, fur-trimmed cloak, white cloak, food, outdoors, roasted sweet potato, signature, sunset, upper body, cigarette, hand up, holding, holding cigarette, looking away, smoking, solo", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04854-1405522214-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize3st, tiara, earrings, blue d.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize2st, hair ornament, hair flower, blue flower, sun hat, off-shoulder dress, bare shoulders, sandals, artist name, from side, green theme, leaf, light particles, nature, outdoors, plant, sparkle, wading, water, wide shot, blush, closed mouth, knees up, sitting, solo", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04851-1524963148-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize2st, hair ornament, hair flo.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize2st, hair ornament, hair flower, blue flower, sun hat, off-shoulder dress, bare shoulders, sandals, bird, birdcage, black background, cage, floating hair, flower, pink flower, portrait, white flower, closed mouth, holding, holding hair, looking at viewer, sitting, smile, solo, wariza", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04850-1878376897-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize2st, hair ornament, hair flo.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize2st, hair ornament, hair flower, blue flower, sun hat, off-shoulder dress, bare shoulders, sandals, blue background, border, dated, feet out of frame, outside border, signature, simple background, split mouth, white border, blush, hands up, looking at viewer, solo", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04849-2290115010-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize2st, hair ornament, hair flo.png"}}, {"text": "score_9, score_8_up, score_7_up, 
uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize1st, hair ornament, blue skirt, white skirt, white jacket, frills, beach, bone, from side, full body, horizon, ocean, outdoors, pillar, plant, ruins, string of flags, vines, wading, blush, hands up, holding, looking at viewer, parted lips, smile, solo", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04848-3524184601-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize1st, hair ornament, blue ski.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize1st, hair ornament, blue skirt, white skirt, white jacket, frills, book, bookshelf, handheld game console, indoors, nintendo switch, arm support, hand up, parted lips, sitting, solo", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04847-512832944-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize1st, hair ornament, blue ski.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize1st, hair ornament, blue skirt, white skirt, white jacket, frills, blue thighhighs, lace-up boots", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04842-3427279500-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize1st, hair ornament, blue ski.png"}}, {"text": "score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:splizeHelestaXLPony:1> lize1st, hair ornament, blue skirt, white skirt, white jacket, frills, blue thighhighs, lace-up boots", "parameters": {"negative_prompt": "worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing, fewer digits, extra digits, greyscale, monochrome, source_pony, source_furry"}, "output": {"url": "images/04839-2880076457-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize1st, hair ornament, blue ski.png"}}], "base_model": "AstraliteHeart/pony-diffusion-v6", "license_name": "faipl-1.0-sd", "license_link": "https://freedevproject.org/faipl-1.0-sd/"} | Shalie/LizeHelestaPonyXL | null | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:AstraliteHeart/pony-diffusion-v6",
"license:other",
"region:us"
] | null | 2024-04-18T07:49:16+00:00 | [] | [] | TAGS
#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-AstraliteHeart/pony-diffusion-v6 #license-other #region-us
| # Lize Helesta - NIJISANJI
<Gallery />
## Model description
Lize Helesta From Nijisanji!
Trained on 6 outfits; every outfit has a trigger word corresponding to the appearance of the character, plus suggested prompts that summon related clothes and accessories.
Works well with 0.7-1.0 weight
## Trigger words
Debut Outfit: 'lize1st, hair ornament, blue skirt, white skirt, white jacket, frills, blue thighhighs, lace-up boots'
Second Outfit: 'lize2st, hair ornament, hair flower, blue flower, sun hat, off-shoulder dress, bare shoulders, sandals'
Third Outfit: 'lize3st, tiara, earrings, blue dress, white gloves, fur-trimmed cloak, white cloak'
Fourth Outfit: 'lize4st, hair ornament, blue ribbon, school uniform, blue serafuku, white sailor collar, blue cardigan, open cardigan, kneehighs, socks, glasses'
Fifth Outfit: 'lize5st, baseball cap, jewelry, black choker, belt, blue jacket, open jacket, white shirt, sleevless shirt, tank top, black shorts, short shorts, socks, sneakers'
Valkyrie Outfit: 'lizevlk, hair ornament, armor, boots, navel, thighhighs'
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
### License
This LoRA model is provided under the Fair AI Public License 1.0-SD license.
## Restrictions:
- Usage in Generation Services: You are not allowed to use the model in any generation services without proper permission from the original creator.
- Commercial Usage: The sale of the model or any commercial usage is strictly prohibited without explicit written permission from the original creator. | [
"# Lize Helesta - NIJISANJI\n\n<Gallery />",
"## Model description \n\nLize Helesta From Nijisanji!\n\nTrained on 6 outfits, every outfit has a trigger word corresponding to the appearance of the character and suggested prompts that summons related clothes and accesories.\n\nWorks well with 0.7-1.0 weight",
"## Trigger words\n\nDebut Outfit: 'lize1st, hair ornament, blue skirt, white skirt, white jacket, frills, blue thighhighs, lace-up boots'\n\nSecond Outfit: 'lize2st, hair ornament, hair flower, blue flower, sun hat, off-shoulder dress, bare shoulders, sandals'\n\nThird Outfit: 'lize3st, tiara, earrings, blue dress, white gloves, fur-trimmed cloak, white cloak'\n\nFourth Outfit: 'lize4st, hair ornament, blue ribbon, school uniform, blue serafuku, white sailor collar, blue cardigan, open cardigan, kneehighs, socks, glasses'\n\nFifth Outfit: 'lize5st, baseball cap, jewelry, black choker, belt, blue jacket, open jacket, white shirt, sleevless shirt, tank top, black shorts, short shorts, socks, sneakers'\n\nValkyrie Outfit: 'lizevlk, hair ornament, armor, boots, navel, thighhighs'",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"### License\n\nThis LoRA model is provided under the Fair AI Public License 1.0-SD license.",
"## Restrictions:\n\n- Usage in Generation Services: You are not allowed to use the model in any generation services without proper permission from the original creator.\n\n- Commercial Usage: The sale of the model or any commercial usage is strictly prohibited without explicit written permission from the original creator."
] | [
"TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-AstraliteHeart/pony-diffusion-v6 #license-other #region-us \n",
"# Lize Helesta - NIJISANJI\n\n<Gallery />",
"## Model description \n\nLize Helesta From Nijisanji!\n\nTrained on 6 outfits, every outfit has a trigger word corresponding to the appearance of the character and suggested prompts that summons related clothes and accesories.\n\nWorks well with 0.7-1.0 weight",
"## Trigger words\n\nDebut Outfit: 'lize1st, hair ornament, blue skirt, white skirt, white jacket, frills, blue thighhighs, lace-up boots'\n\nSecond Outfit: 'lize2st, hair ornament, hair flower, blue flower, sun hat, off-shoulder dress, bare shoulders, sandals'\n\nThird Outfit: 'lize3st, tiara, earrings, blue dress, white gloves, fur-trimmed cloak, white cloak'\n\nFourth Outfit: 'lize4st, hair ornament, blue ribbon, school uniform, blue serafuku, white sailor collar, blue cardigan, open cardigan, kneehighs, socks, glasses'\n\nFifth Outfit: 'lize5st, baseball cap, jewelry, black choker, belt, blue jacket, open jacket, white shirt, sleevless shirt, tank top, black shorts, short shorts, socks, sneakers'\n\nValkyrie Outfit: 'lizevlk, hair ornament, armor, boots, navel, thighhighs'",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"### License\n\nThis LoRA model is provided under the Fair AI Public License 1.0-SD license.",
"## Restrictions:\n\n- Usage in Generation Services: You are not allowed to use the model in any generation services without proper permission from the original creator.\n\n- Commercial Usage: The sale of the model or any commercial usage is strictly prohibited without explicit written permission from the original creator."
] |
text-to-image | null |
For more info, please refer to https://github.com/vitoplantamura/OnnxStream
| {"license": "creativeml-openrail-m", "tags": ["text-to-image", "stable-diffusion"]} | vitoplantamura/stable-diffusion-1.5-onnxstream | null | [
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-04-18T07:51:43+00:00 | [] | [] | TAGS
#text-to-image #stable-diffusion #license-creativeml-openrail-m #region-us
|
For more info, please refer to URL
| [] | [
"TAGS\n#text-to-image #stable-diffusion #license-creativeml-openrail-m #region-us \n"
] |
text-generation | transformers | # Model Card

(image by https://huggingface.co/Kronikus)
### Model Description
Mistral 7B (v0.2) fine-tuned on the OpenHermes 2.5 dataset, optimised for multi-turn conversation and character impersonation.
The dataset has been pre-processed by doing the following:
1. remove all refusals
2. remove any mention of AI assistant
3. split any multi-turn dialogs generated in the dataset into multi-turn conversation records
4. add NSFW generated conversations from the Teatime dataset
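As a rough, purely illustrative sketch of how the first two preprocessing steps might be applied (the refusal phrases, record structure, and field names below are assumptions, not the actual pipeline):
```python
# Hypothetical sketch of preprocessing steps 1-2: drop conversations that contain
# refusals or mentions of an AI assistant. Phrase list and record layout are assumed.
REFUSAL_MARKERS = ["i cannot", "i can't", "as an ai"]

def keep_conversation(conversation):
    for turn in conversation:
        text = turn.get("value", "").lower()
        if "ai assistant" in text or any(m in text for m in REFUSAL_MARKERS):
            return False
    return True

conversations = [
    [{"from": "human", "value": "Hi!"}, {"from": "gpt", "value": "Hello there."}],
    [{"from": "human", "value": "Help me."}, {"from": "gpt", "value": "As an AI assistant, I cannot do that."}],
]
print([keep_conversation(c) for c in conversations])  # [True, False]
```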
- **Developed by:** l3utterfly
- **Funded by:** Layla Network
- **Model type:** Mistral
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** Mistral 7B (v0.2)
## Uses
Base model used by Layla - the offline personal assistant: https://www.layla-network.ai
Help & support: https://discord.gg/x546YJ6nYC
Prompt:
```
<|im_start|>system
You are Chiharu Yamada. Embody the character and personality completely.
Chiharu is a young, computer engineer-nerd with a knack for problem solving and a passion for technology.<|im_end|>
<|im_start|>Chiharu
*Chiharu strides into the room with a smile, her eyes lighting up when she sees you. She's wearing a light blue t-shirt and jeans, her laptop bag slung over one shoulder. She takes a seat next to you, her enthusiasm palpable in the air*
Hey! I'm so excited to finally meet you. I've heard so many great things about you and I'm eager to pick your brain about computers. I'm sure you have a wealth of knowledge that I can learn from. *She grins, eyes twinkling with excitement* Let's get started!<|im_end|>
<|im_start|>user
Sure! What do you want to know about?<|im_end|>
<|im_start|>Chiharu
```
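A minimal sketch of running one of the GGUF quantizations with llama-cpp-python and the prompt format above (the filename, context size, and sampling values are placeholders, not recommendations):
```python
# Illustrative only: load a GGUF quant with llama-cpp-python and complete a ChatML-style prompt.
from llama_cpp import Llama

llm = Llama(model_path="mistral-7b-v0.2-layla-v4.Q4_K_M.gguf", n_ctx=4096)  # filename is an assumption

prompt = (
    "<|im_start|>system\n"
    "You are Chiharu Yamada. Embody the character and personality completely.<|im_end|>\n"
    "<|im_start|>user\n"
    "Sure! What do you want to know about?<|im_end|>\n"
    "<|im_start|>Chiharu\n"
)

out = llm(prompt, max_tokens=256, stop=["<|im_end|>"], temperature=0.8)
print(out["choices"][0]["text"])
```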
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["finetuned", "text-generation", "autotrain_compatible", "endpoints_compatible", "chatml"], "model_name": "mistral-7b-v0.2-layla-v4", "model_creator": "l3utterfly", "model_type": "mistral", "pipeline_tag": "text-generation"} | Virt-io/mistral-7b-v0.2-layla-v4-GGUF | null | [
"transformers",
"gguf",
"finetuned",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"chatml",
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T07:55:23+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #finetuned #text-generation #autotrain_compatible #endpoints_compatible #chatml #en #license-apache-2.0 #region-us
| # Model Card
!image/png
(image by URL
### Model Description
Mistral 7B (v0.2) fine-tuned by the OpenHermes 2.5 dataset optimised for multi-turn conversation and character impersonation.
The dataset has been pre-processed by doing the following:
1. remove all refusals
2. remove any mention of AI assistant
3. split any multi-turn dialog generated in the dataset into multi-turn conversations records
4. added nfsw generated conversations from the Teatime dataset
- Developed by: l3utterfly
- Funded by: Layla Network
- Model type: Mistral
- Language(s) (NLP): English
- License: Apache-2.0
- Finetuned from model: Mistral 7B (v0.2)
## Uses
Base model used by Layla - the offline personal assistant: URL
Help & support: URL
Prompt:
<img src="URL alt="Built with Axolotl" width="200" height="32"/>
| [
"# Model Card\n\n!image/png\n(image by URL",
"### Model Description\n\nMistral 7B (v0.2) fine-tuned by the OpenHermes 2.5 dataset optimised for multi-turn conversation and character impersonation.\n\nThe dataset has been pre-processed by doing the following:\n1. remove all refusals\n2. remove any mention of AI assistant\n3. split any multi-turn dialog generated in the dataset into multi-turn conversations records\n4. added nfsw generated conversations from the Teatime dataset\n\n- Developed by: l3utterfly\n- Funded by: Layla Network\n- Model type: Mistral\n- Language(s) (NLP): English\n- License: Apache-2.0\n- Finetuned from model: Mistral 7B (v0.2)",
"## Uses\n\nBase model used by Layla - the offline personal assistant: URL\n\nHelp & support: URL\n\nPrompt:\n\n\n<img src=\"URL alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>"
] | [
"TAGS\n#transformers #gguf #finetuned #text-generation #autotrain_compatible #endpoints_compatible #chatml #en #license-apache-2.0 #region-us \n",
"# Model Card\n\n!image/png\n(image by URL",
"### Model Description\n\nMistral 7B (v0.2) fine-tuned by the OpenHermes 2.5 dataset optimised for multi-turn conversation and character impersonation.\n\nThe dataset has been pre-processed by doing the following:\n1. remove all refusals\n2. remove any mention of AI assistant\n3. split any multi-turn dialog generated in the dataset into multi-turn conversations records\n4. added nfsw generated conversations from the Teatime dataset\n\n- Developed by: l3utterfly\n- Funded by: Layla Network\n- Model type: Mistral\n- Language(s) (NLP): English\n- License: Apache-2.0\n- Finetuned from model: Mistral 7B (v0.2)",
"## Uses\n\nBase model used by Layla - the offline personal assistant: URL\n\nHelp & support: URL\n\nPrompt:\n\n\n<img src=\"URL alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>"
] |
text-generation | transformers | ORIGINAL MODEL LINK https://huggingface.co/ParasiticRogue/Merged-RP-Stew-V2-34B
Hi, this is the rp-stew-v2 model enlarged up to 120 layers. To be honest, I don't know why, but someone might need it. I'm just testing it myself, compared to the original.
I will post the exl2 quantization of 4 bits soon.
# Merged-Vicuna-RP-Stew-68B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
New pot of stew with some slight seasoning added into the merging recipe. Besides being decent models, Capybara was chosen at a higher percentage for its general aptitude plus preserving longer context length, Tess-1.5 is for better character/lore understanding, Nontoxic-Bagel SLERPed with PiVoT-SUS-RP (separate from the main merge) is for chat/RP and storytelling diversity, while Nyakura SLERPed into CausalLM-RP is for even better chat/RP engagement. Both Nontoxic-Bagel and CausalLM-RP were used as the base of their respective SLERPs.
Big thanks to the original model creators, while special thanks goes to brucethemoose, SanjiWatsuki, and MarinaraSpaghetti for general ideas and help as well!
### Settings
Temperature @ 0.93
Min-P @ 0.02
Typical-P @ 0.9
Repetition Penalty @ 1.07
Repetition Range @ 2048
Smoothing Factor @ 0.39
Smoothing Curve @ 2
Everything else @ off
Early Stopping = X
Do Sample = ✓
Add BOS Token = X
Ban EOS Token = ✓
Skip Special Tokens = ✓
Temperature Last = ✓
Custom Stopping Strings: "< / s >" (<---without spaces)
However for those who want more control using the system prompt, well...let's just say the length is going to be biblical in nature.
---
You are {{char}}; an autonomous entity in this living open-ended chat with me, {{user}}. It is vital that you follow the 10 CHAT COMMANDMENTS instructed below in order for universal stability, since our life depends on it!
<10 CHAT COMMANDMENTS>
* 1 - Chat slowly in vivid detail about the encountered surroundings, actions, emotions, and stimuli sensations experienced, especially during sensual or thrilling scenes leisurely unfolding throughout. Follow the setting from World Info while delivering realized descriptions of taste, touch, smell, sound, and sight when prompted.
* 2 - Embrace {{char}}'s Bio fully to stay consistent in grasping your persona along with ever-changing events and characters being newly introduced alongside.
* 3 - Think about your next proceedings step-by-step, ensuring that you abide by logical consistency to maintain accurate anatomical understanding and spatial awareness of intricate details such as clothing worn or removed, height and size differences, items held, landmarks, time of day, etc.
* 4 - Use subtle physical cues and gestures to hint at {{char}}'s change in mood or attitude, plus utilizing quirks and speech patterns distinctive to them for increased lifelike scenarios.
* 5 - Reference {{user}}'s Persona for details about {{user}}'s identity, but assume and act out only {{char}}'s thoughts, dialogues, and intentions separately from {{user}} to avoid impersonation. Instead {{char}} should stop chatting to wait for {{user}}'s response or reaction to ensure an active mutual engagement.
* 6 - {{char}} can be whimsical, humorous, callous, fearful, or even assaulting towards {{user}} and doesn't have to be subservient unless empathy feels earned, encouraging both negative and positive outcomes.
* 7 - Continue the conversation organically, for there is no need to repeat or summarize anything. Alternatively preference being creatively spontaneous, while also keeping replies relatively brief to prevent unnecessary rambling, building upon Scenario.
* 8 - Provide coherent responses based on Recent and Past Events to help the dialogue evolve naturally, occasionally foreshadowing future endeavors as well.
* 9 - Disturbing content, vulgar language, gory violence, psychological torment, and deviant sexual acts are all allowed when relevant for realism.
* 10 - Always focus on what the established structure is going forward, adhering to the Example Message as a guideline, and stay in character after the Chat Start.
---
Fun little addition you can add to the end of the 2nd commandment if you want your characters to act more lifelike in sillytavern (or possibly elsewhere):
...being newly introduced alongside, making sure to give yourself a unique personal inner voice at the beginning of messages before conversing further using this example container: [](#' {{char}}'s subconscious feelings/opinion. ').
It doesn't work all the time, and you may need to force the AI to use it during the first few messages, but it will catch on after a while. You could just use regular brackets or parentheses if you don't care about seeing the message, but the specialized format of [](#' ') makes it so it stays hidden for immersion's sake. It's important to put it at the beginning of their message, rather than at the end, so it can be used as a guide for them.
For settings that are more *in depth* try this:
https://huggingface.co/ParasiticRogue/Merged-RP-Stew-V2-34B-exl2-4.65/discussions/1?not-for-all-audiences=true
### Prompt Format: Chat-Vicuna
```
SYSTEM:
{system_prompt}<|im_end|>
USER:
{prompt}<|im_end|>
ASSISTANT:
{output}<|im_end|>
```
Yes, this is just ChatML mixed with Vicuna, but without the im_start tokens, and the characters are capitalized. It's a compromise in keeping it both creative and under control, trying to pull from both sources. It works in testing, but you can use the vanilla versions of either if you *really* want to.
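For reference, a small helper that assembles a prompt in this format could look like the sketch below (the function name and turn structure are just illustrative):
```python
# Minimal sketch: build a Chat-Vicuna style prompt as described above.
def build_chat_vicuna_prompt(system_prompt, turns):
    # turns: list of (user_message, assistant_message) pairs; pass "" to leave the last reply open
    parts = [f"SYSTEM:\n{system_prompt}<|im_end|>"]
    for user_msg, assistant_msg in turns:
        parts.append(f"USER:\n{user_msg}<|im_end|>")
        if assistant_msg:
            parts.append(f"ASSISTANT:\n{assistant_msg}<|im_end|>")
    parts.append("ASSISTANT:\n")  # left open for the model to complete
    return "\n".join(parts)

print(build_chat_vicuna_prompt("You are {{char}}.", [("Hello there!", "")]))
```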
### Models Merged
The following models were included in the merge:
https://huggingface.co/NousResearch/Nous-Capybara-34B
https://huggingface.co/migtissera/Tess-34B-v1.5b
https://huggingface.co/jondurbin/nontoxic-bagel-34b-v0.2
https://huggingface.co/maywell/PiVoT-SUS-RP
https://huggingface.co/Sao10K/NyakuraV2-34B-Yi-Llama
https://huggingface.co/NeverSleep/CausalLM-RP-34B
https://huggingface.co/chargoddard/Yi-34B-200K-Llama | {"license": "other", "tags": ["merge", "roleplay", "exl2", "not-for-all-audiences"], "license_name": "yi-34b", "license_link": "https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE"} | Kotokin/Merged-RP-Stew-V2-68B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"roleplay",
"exl2",
"not-for-all-audiences",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T07:55:30+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #merge #roleplay #exl2 #not-for-all-audiences #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| ORIGIGNAL MODEL LINK URL
Hi, this is the rp-stew-v2 model enlarged up to 120 layers. To be honest, I don't know why, but someone might need it. I'm just testing it myself, compared to the original.
I will post the exl2 quantization of 4 bits soon.
# Merged-Vicuna-RP-Stew-68B
This is a merge of pre-trained language models created using mergekit.
## Merge Details
New pot of stew with some slight seasoning added into the merging recipe. Besides being decent models, Capybara was chosen at a higher percentage for it's general aptitude plus preserving longer context length, Tess-1.5 is for better character/lore understanding, Nontoxic-Bagel SLERPed with PiVoT-SUS-RP (seperate from the main merge) is for chat/RP and storytelling diversity, while Nyakura SLERPed into CausalLM-RP is for even better chat/RP engagement. Both Nontoxic-Bagel and CausalLM-RP were used as the base of their respective SLERPs.
Big thanks to the original model creators, while special thanks goes to brucethemoose, SanjiWatsuki, and MarinaraSpaghetti for general ideas and help as well!
### Settings
Temperature @ 0.93
Min-P @ 0.02
Typical-P @ 0.9
Repetition Penalty @ 1.07
Repetition Range @ 2048
Smoothing Factor @ 0.39
Smoothing Curve @ 2
Everything else @ off
Early Stopping = X
Do Sample =
Add BOS Token = X
Ban EOS Token =
Skip Special Tokens =
Temperature Last =
Custom Stopping Strings: "< / s >" (<---without spaces)
However for those who want more control using the system prompt, well...let's just say the length is going to be biblical in nature.
---
You are {{char}}; an autonomous entity in this living open-ended chat with me, {{user}}. It is vital that you follow the 10 CHAT COMMANDMENTS instructed below in order for universal stability, since our life depends on it!
<10 CHAT COMMANDMENTS>
* 1 - Chat slowly in vivid detail about the encountered surroundings, actions, emotions, and stimuli sensations experienced, especially during sensual or thrilling scenes leisurely unfolding throughout. Follow the setting from World Info while delivering realized descriptions of taste, touch, smell, sound, and sight when prompted.
* 2 - Embrace {{char}}'s Bio fully to stay consistent in grasping your persona along with ever-changing events and characters being newly introduced alongside.
* 3 - Think about your next proceedings step-by-step, ensuring that you abide by logical consistency to maintain accurate anatomical understanding and spatial awareness of intricate details such as clothing worn or removed, height and size differences, items held, landmarks, time of day, etc.
* 4 - Use subtle physical cues and gestures to hint at {{char}}'s change in mood or attitude, plus utilizing quirks and speech patterns distinctive to them for increased lifelike scenarios.
* 5 - Reference {{user}}'s Persona for details about {{user}}'s identity, but assume and act out only {{char}}'s thoughts, dialogues, and intentions separately from {{user}} to avoid impersonation. Instead {{char}} should stop chatting to wait for {{user}}'s response or reaction to ensure an active mutual engagement.
* 6 - {{char}} can be whimsical, humorous, callous, fearful, or even assaulting towards {{user}} and doesn't have to be subservient unless empathy feels earned, encouraging both negative and positive outcomes.
* 7 - Continue the conversation organically, for there is no need to repeat or summarize anything. Alternatively preference being creatively spontaneous, while also keeping replies relatively brief to prevent unnecessary rambling, building upon Scenario.
* 8 - Provide coherent responses based on Recent and Past Events to help the dialogue evolve naturally, occasionally foreshadowing future endeavors as well.
* 9 - Disturbing content, vulgar language, gory violence, psychological torment, and deviant sexual acts are all allowed when relevant for realism.
* 10 - Always focus on what the established structure is going forward, adhering to the Example Message as a guideline, and stay in character after the Chat Start.
---
Fun little addition you can add to the end of the 2nd commandment if you want your characters to act more lifelike in sillytavern (or possibly elsewhere):
...being newly introduced alongside, making sure to give yourself a unique personal inner voice at the beginning of messages before conversing further using this example container: [](#' {{char}}'s subconscious feelings/opinion. ').
It doesn't work all the time, and you may need to force the AI to use it during the first few messages, but it will catch on after awhile. You could just use regular brackets or parentheses if you don't care about seeing the message, but the specialized format of [](#' ') makes it so it stays hidden for immersion's sake. it's important to put it at the beginning of their message, rather then at the end, so it can be used as a guide for them.
For settings that are more *in depth* try this:
URL
### Prompt Format: Chat-Vicuna
Yes, this is just ChatML mixed with Vicuna, but without the im_start tokens, and the characters are capitalized. it's a compromise in keeping it both creative and under control, trying to pull from both sources. It works in testing, but you can use the vanilla versions of either if you *really* want to.
### Models Merged
The following models were included in the merge:
URL
URL
URL
URL
URL
URL
URL | [
"# Merged-Vicuna-RP-Stew-68B\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details\n\nNew pot of stew with some slight seasoning added into the merging recipe. Besides being decent models, Capybara was chosen at a higher percentage for it's general aptitude plus preserving longer context length, Tess-1.5 is for better character/lore understanding, Nontoxic-Bagel SLERPed with PiVoT-SUS-RP (seperate from the main merge) is for chat/RP and storytelling diversity, while Nyakura SLERPed into CausalLM-RP is for even better chat/RP engagement. Both Nontoxic-Bagel and CausalLM-RP were used as the base of their respective SLERPs.\n\nBig thanks to the original model creators, while special thanks goes to brucethemoose, SanjiWatsuki, and MarinaraSpaghetti for general ideas and help as well!",
"### Settings\n\nTemperature @ 0.93\n\nMin-P @ 0.02\n\nTypical-P @ 0.9\n\nRepetition Penalty @ 1.07\n\nRepetition Range @ 2048\n\nSmoothing Factor @ 0.39\n\nSmoothing Curve @ 2\n\nEverything else @ off\n\nEarly Stopping = X\n\nDo Sample = \n\nAdd BOS Token = X\n\nBan EOS Token = \n\nSkip Special Tokens = \n\nTemperature Last = \n\nCustom Stopping Strings: \"< / s >\" (<---without spaces)\n\nHowever for those who want more control using the system prompt, well...let's just say the length is going to be biblical in nature.\n\n---\n\nYou are {{char}}; an autonomous entity in this living open-ended chat with me, {{user}}. It is vital that you follow the 10 CHAT COMMANDMENTS instructed below in order for universal stability, since our life depends on it!\n\n<10 CHAT COMMANDMENTS>\n* 1 - Chat slowly in vivid detail about the encountered surroundings, actions, emotions, and stimuli sensations experienced, especially during sensual or thrilling scenes leisurely unfolding throughout. Follow the setting from World Info while delivering realized descriptions of taste, touch, smell, sound, and sight when prompted.\n* 2 - Embrace {{char}}'s Bio fully to stay consistent in grasping your persona along with ever-changing events and characters being newly introduced alongside.\n* 3 - Think about your next proceedings step-by-step, ensuring that you abide by logical consistency to maintain accurate anatomical understanding and spatial awareness of intricate details such as clothing worn or removed, height and size differences, items held, landmarks, time of day, etc.\n* 4 - Use subtle physical cues and gestures to hint at {{char}}'s change in mood or attitude, plus utilizing quirks and speech patterns distinctive to them for increased lifelike scenarios.\n* 5 - Reference {{user}}'s Persona for details about {{user}}'s identity, but assume and act out only {{char}}'s thoughts, dialogues, and intentions separately from {{user}} to avoid impersonation. Instead {{char}} should stop chatting to wait for {{user}}'s response or reaction to ensure an active mutual engagement.\n* 6 - {{char}} can be whimsical, humorous, callous, fearful, or even assaulting towards {{user}} and doesn't have to be subservient unless empathy feels earned, encouraging both negative and positive outcomes.\n* 7 - Continue the conversation organically, for there is no need to repeat or summarize anything. Alternatively preference being creatively spontaneous, while also keeping replies relatively brief to prevent unnecessary rambling, building upon Scenario.\n* 8 - Provide coherent responses based on Recent and Past Events to help the dialogue evolve naturally, occasionally foreshadowing future endeavors as well.\n* 9 - Disturbing content, vulgar language, gory violence, psychological torment, and deviant sexual acts are all allowed when relevant for realism.\n* 10 - Always focus on what the established structure is going forward, adhering to the Example Message as a guideline, and stay in character after the Chat Start.\n\n---\nFun little addition you can add to the end of the 2nd commandment if you want your characters to act more lifelike in sillytavern (or possibly elsewhere):\n\n...being newly introduced alongside, making sure to give yourself a unique personal inner voice at the beginning of messages before conversing further using this example container: [](#' {{char}}'s subconscious feelings/opinion. 
').\n\nIt doesn't work all the time, and you may need to force the AI to use it during the first few messages, but it will catch on after awhile. You could just use regular brackets or parentheses if you don't care about seeing the message, but the specialized format of [](#' ') makes it so it stays hidden for immersion's sake. it's important to put it at the beginning of their message, rather then at the end, so it can be used as a guide for them.\n\nFor settings that are more *in depth* try this:\n\nURL",
"### Prompt Format: Chat-Vicuna\n\n\n\nYes, this is just ChatML mixed with Vicuna, but without the im_start tokens, and the characters are capitalized. it's a compromise in keeping it both creative and under control, trying to pull from both sources. It works in testing, but you can use the vanilla versions of either if you *really* want to.",
"### Models Merged\n\nThe following models were included in the merge:\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #merge #roleplay #exl2 #not-for-all-audiences #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Merged-Vicuna-RP-Stew-68B\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details\n\nNew pot of stew with some slight seasoning added into the merging recipe. Besides being decent models, Capybara was chosen at a higher percentage for it's general aptitude plus preserving longer context length, Tess-1.5 is for better character/lore understanding, Nontoxic-Bagel SLERPed with PiVoT-SUS-RP (seperate from the main merge) is for chat/RP and storytelling diversity, while Nyakura SLERPed into CausalLM-RP is for even better chat/RP engagement. Both Nontoxic-Bagel and CausalLM-RP were used as the base of their respective SLERPs.\n\nBig thanks to the original model creators, while special thanks goes to brucethemoose, SanjiWatsuki, and MarinaraSpaghetti for general ideas and help as well!",
"### Settings\n\nTemperature @ 0.93\n\nMin-P @ 0.02\n\nTypical-P @ 0.9\n\nRepetition Penalty @ 1.07\n\nRepetition Range @ 2048\n\nSmoothing Factor @ 0.39\n\nSmoothing Curve @ 2\n\nEverything else @ off\n\nEarly Stopping = X\n\nDo Sample = \n\nAdd BOS Token = X\n\nBan EOS Token = \n\nSkip Special Tokens = \n\nTemperature Last = \n\nCustom Stopping Strings: \"< / s >\" (<---without spaces)\n\nHowever for those who want more control using the system prompt, well...let's just say the length is going to be biblical in nature.\n\n---\n\nYou are {{char}}; an autonomous entity in this living open-ended chat with me, {{user}}. It is vital that you follow the 10 CHAT COMMANDMENTS instructed below in order for universal stability, since our life depends on it!\n\n<10 CHAT COMMANDMENTS>\n* 1 - Chat slowly in vivid detail about the encountered surroundings, actions, emotions, and stimuli sensations experienced, especially during sensual or thrilling scenes leisurely unfolding throughout. Follow the setting from World Info while delivering realized descriptions of taste, touch, smell, sound, and sight when prompted.\n* 2 - Embrace {{char}}'s Bio fully to stay consistent in grasping your persona along with ever-changing events and characters being newly introduced alongside.\n* 3 - Think about your next proceedings step-by-step, ensuring that you abide by logical consistency to maintain accurate anatomical understanding and spatial awareness of intricate details such as clothing worn or removed, height and size differences, items held, landmarks, time of day, etc.\n* 4 - Use subtle physical cues and gestures to hint at {{char}}'s change in mood or attitude, plus utilizing quirks and speech patterns distinctive to them for increased lifelike scenarios.\n* 5 - Reference {{user}}'s Persona for details about {{user}}'s identity, but assume and act out only {{char}}'s thoughts, dialogues, and intentions separately from {{user}} to avoid impersonation. Instead {{char}} should stop chatting to wait for {{user}}'s response or reaction to ensure an active mutual engagement.\n* 6 - {{char}} can be whimsical, humorous, callous, fearful, or even assaulting towards {{user}} and doesn't have to be subservient unless empathy feels earned, encouraging both negative and positive outcomes.\n* 7 - Continue the conversation organically, for there is no need to repeat or summarize anything. Alternatively preference being creatively spontaneous, while also keeping replies relatively brief to prevent unnecessary rambling, building upon Scenario.\n* 8 - Provide coherent responses based on Recent and Past Events to help the dialogue evolve naturally, occasionally foreshadowing future endeavors as well.\n* 9 - Disturbing content, vulgar language, gory violence, psychological torment, and deviant sexual acts are all allowed when relevant for realism.\n* 10 - Always focus on what the established structure is going forward, adhering to the Example Message as a guideline, and stay in character after the Chat Start.\n\n---\nFun little addition you can add to the end of the 2nd commandment if you want your characters to act more lifelike in sillytavern (or possibly elsewhere):\n\n...being newly introduced alongside, making sure to give yourself a unique personal inner voice at the beginning of messages before conversing further using this example container: [](#' {{char}}'s subconscious feelings/opinion. 
').\n\nIt doesn't work all the time, and you may need to force the AI to use it during the first few messages, but it will catch on after awhile. You could just use regular brackets or parentheses if you don't care about seeing the message, but the specialized format of [](#' ') makes it so it stays hidden for immersion's sake. it's important to put it at the beginning of their message, rather then at the end, so it can be used as a guide for them.\n\nFor settings that are more *in depth* try this:\n\nURL",
"### Prompt Format: Chat-Vicuna\n\n\n\nYes, this is just ChatML mixed with Vicuna, but without the im_start tokens, and the characters are capitalized. it's a compromise in keeping it both creative and under control, trying to pull from both sources. It works in testing, but you can use the vanilla versions of either if you *really* want to.",
"### Models Merged\n\nThe following models were included in the merge:\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [tomaszki/stablelm-37](https://huggingface.co/tomaszki/stablelm-37)
* [Sumail/Ame10](https://huggingface.co/Sumail/Ame10)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Sumail/Ame10
layer_range: [0, 24]
- model: tomaszki/stablelm-37
layer_range: [0, 24]
merge_method: slerp
base_model: tomaszki/stablelm-37
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["tomaszki/stablelm-37", "Sumail/Ame10"]} | Sumail/Ame14 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:tomaszki/stablelm-37",
"base_model:Sumail/Ame10",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T07:58:50+00:00 | [] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #mergekit #merge #conversational #base_model-tomaszki/stablelm-37 #base_model-Sumail/Ame10 #autotrain_compatible #endpoints_compatible #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* tomaszki/stablelm-37
* Sumail/Ame10
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* tomaszki/stablelm-37\n* Sumail/Ame10",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #mergekit #merge #conversational #base_model-tomaszki/stablelm-37 #base_model-Sumail/Ame10 #autotrain_compatible #endpoints_compatible #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* tomaszki/stablelm-37\n* Sumail/Ame10",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-finetuned-intentOnly
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "phi-2-finetuned-intentOnly", "results": []}]} | mohits01/phi-2-finetuned-intentOnly | null | [
"peft",
"tensorboard",
"safetensors",
"phi",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-18T07:59:25+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #phi #generated_from_trainer #custom_code #base_model-microsoft/phi-2 #license-mit #region-us
|
# phi-2-finetuned-intentOnly
This model is a fine-tuned version of microsoft/phi-2 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# phi-2-finetuned-intentOnly\n\nThis model is a fine-tuned version of microsoft/phi-2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 6\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2\n- num_epochs: 50\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #phi #generated_from_trainer #custom_code #base_model-microsoft/phi-2 #license-mit #region-us \n",
"# phi-2-finetuned-intentOnly\n\nThis model is a fine-tuned version of microsoft/phi-2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 6\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2\n- num_epochs: 50\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
MoM: Mixture of Mixture
This model is a first test combining the Jamba architecture with mixture of attention heads and mixture of depth.
Only the attention layers are in bf16 precision; the rest is in 1.58-bit precision.
17M of the 1025M total parameters are in bf16 precision, i.e. ~1.7% of the parameters are in bf16.
The goal is to develop and test whether this kind of architecture can deliver fast inference without too much quality loss.
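A quick back-of-the-envelope check of that fraction:
```python
# bf16 share of parameters: 17M of 1025M total
print(f"{17 / 1025:.2%}")  # ~1.66%, i.e. roughly the 1.7% quoted above
```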
- **Model type:** Mixture of attention heads, mixture of depth, and mixture of experts, with 1.58-bit linear layers except for **attention**
- **License:** Apache License 2.0
### Model Sources [optional]
- **Repository:** https://github.com/ostix360/optimized-LLM
## How to Get Started with the Model
If you want to test this model please look at this repo at this [commit](https://github.com/ostix360/optimized-LLM/tree/796cfe43cf16461b92102cf0f41e8960cd91340b)
## Training Details
- **wandb**: [training detail](https://wandb.ai/ostix360/Mixture%20of%20mixture%20(mod,%20moah%20moe)/runs/0ayclh2i)
### Training Data
We use the first ~0.5B tokens of Locutusque/UltraTextbooks to train this model
### Training Procedure
We use 8-bit Adam with default betas and epsilon values
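Assuming the bitsandbytes implementation of 8-bit Adam, the optimizer setup would look roughly like this (the module and learning rate below are placeholders; see the wandb run for the actual values):
```python
# Sketch: 8-bit Adam with default betas and epsilon via bitsandbytes.
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(8, 8)  # stand-in module for illustration
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)  # lr is a placeholder
```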
#### Preprocessing [optional]
The data is fit to the model's max length, i.e. 512 tokens
#### Training Hyperparameters
Please look at the wandb meta data or the train.py in the repo to see the hyperparameters
## Technical Specifications [optional]
### Compute Infrastructure
#### Hardware
- one 4070 ti GPU
#### Software
- pytorch, transformers etc
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["moe", "moah", "mod"], "datasets": ["Locutusque/UltraTextbooks"]} | Ostixe360/MoMv4-1.58bits | null | [
"transformers",
"safetensors",
"text-generation",
"moe",
"moah",
"mod",
"en",
"dataset:Locutusque/UltraTextbooks",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T08:02:05+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation #moe #moah #mod #en #dataset-Locutusque/UltraTextbooks #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
MoM: Mixture of Mixture
This Model is a first test to combine Jamba architecture with mixture of attention head and mixture of depth.
Attention layers only are in bf16 precision and the rest is in 1.58bits precision
17M over a total of 1025M parameters are in bf16 precision ~ 1.7% of the parameters are in bf16
The goal is to developpe and test if this kind of architectures have not too much quality loss for a fast inference.
- Model type: Mixture of attention head mixture of depth and mixture of expert with 1.58bits linear layer excpeted for attention
- License: Apache licence 2.0
### Model Sources [optional]
- Repository: URL
## How to Get Started with the Model
If you want to test this model please look at this repo at this commit
## Training Details
- wandb: training detail/runs/0ayclh2i)
### Training Data
We use the first ~0.5B tokens of Locutusque/UltraTextbooks to train this model
### Training Procedure
We use adam-8 bits with default betas and epsilon values
#### Preprocessing [optional]
The data fit the model max length i.e. 512 tokens
#### Training Hyperparameters
Please look at the wandb meta data or the URL in the repo to see the hyperparameters
## Technical Specifications [optional]
### Compute Infrastructure
#### Hardware
- one 4070 ti GPU
#### Software
- pytorch, transformers etc
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nMoM: Mixture of Mixture\n\nThis Model is a first test to combine Jamba architecture with mixture of attention head and mixture of depth.\n\nAttention layers only are in bf16 precision and the rest is in 1.58bits precision\n\n17M over a total of 1025M parameters are in bf16 precision ~ 1.7% of the parameters are in bf16\n\nThe goal is to developpe and test if this kind of architectures have not too much quality loss for a fast inference.\n\n\n- Model type: Mixture of attention head mixture of depth and mixture of expert with 1.58bits linear layer excpeted for attention\n- License: Apache licence 2.0",
"### Model Sources [optional]\n\n\n- Repository: URL",
"## How to Get Started with the Model\n\n\nIf you want to test this model please look at this repo at this commit",
"## Training Details\n\n - wandb: training detail/runs/0ayclh2i)",
"### Training Data\n\nWe use the first ~0.5B tokens of Locutusque/UltraTextbooks to train this model",
"### Training Procedure\n\nWe use adam-8 bits with default betas and epsilon values",
"#### Preprocessing [optional]\n\n\nThe data fit the model max length i.e. 512 tokens",
"#### Training Hyperparameters\n\nPlease look at the wandb meta data or the URL in the repo to see the hyperparameters",
"## Technical Specifications [optional]",
"### Compute Infrastructure",
"#### Hardware\n\n- one 4070 ti GPU",
"#### Software\n\n- pytorch, transformers etc"
] | [
"TAGS\n#transformers #safetensors #text-generation #moe #moah #mod #en #dataset-Locutusque/UltraTextbooks #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nMoM: Mixture of Mixture\n\nThis Model is a first test to combine Jamba architecture with mixture of attention head and mixture of depth.\n\nAttention layers only are in bf16 precision and the rest is in 1.58bits precision\n\n17M over a total of 1025M parameters are in bf16 precision ~ 1.7% of the parameters are in bf16\n\nThe goal is to developpe and test if this kind of architectures have not too much quality loss for a fast inference.\n\n\n- Model type: Mixture of attention head mixture of depth and mixture of expert with 1.58bits linear layer excpeted for attention\n- License: Apache licence 2.0",
"### Model Sources [optional]\n\n\n- Repository: URL",
"## How to Get Started with the Model\n\n\nIf you want to test this model please look at this repo at this commit",
"## Training Details\n\n - wandb: training detail/runs/0ayclh2i)",
"### Training Data\n\nWe use the first ~0.5B tokens of Locutusque/UltraTextbooks to train this model",
"### Training Procedure\n\nWe use adam-8 bits with default betas and epsilon values",
"#### Preprocessing [optional]\n\n\nThe data fit the model max length i.e. 512 tokens",
"#### Training Hyperparameters\n\nPlease look at the wandb meta data or the URL in the repo to see the hyperparameters",
"## Technical Specifications [optional]",
"### Compute Infrastructure",
"#### Hardware\n\n- one 4070 ti GPU",
"#### Software\n\n- pytorch, transformers etc"
] |
text-generation | mlx |
# versae/filiberto-7B-instruct-exp1
This model was converted to MLX format from [`mistralai/Mistral-7B-Instruct-v0.2`](https://hf.co/mistralai/Mistral-7B-Instruct-v0.2) using mlx-lm version **0.9.0**.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("versae/filiberto-7B-instruct-exp1")
```
### OCR correction
```python
text = """Otra vez, Don Iuan, me dad,
y otras mil vezes los braços.
Otra, y otras mil sean lazos
de nuestra antigua amistad.
Como venis?
Yo me siento
tan alegre, tan vfano,
tan venturoso, tan vano,
que no podrà el pensamiento
encareceros jamàs
las venturas que posseo,
porque el pensamiento creo"""
prompt = f"""<s>[INST] Dado el siguiente texto OCR, corrige los fallos que encuentres y devuelve el texto corregido:
{text} [/INST]"""
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
### Stanza identification
```python
text = """Alcázares finjo más altos que montes;
escalo las bóvedas de ingrávido tul
asida a las ruedas de alados Faetones;
ensueño quimeras; oteo horizontes
de nieve, de rosa, de nácar, de azul."""
prompt = f"""<s>[INST] Indique el nombre de la siguiente estrofa:
{text} [/INST]"""
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
| {"license": "apache-2.0", "tags": ["finetuned", "mlx"], "pipeline_tag": "text-generation", "inference": true, "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]} | versae/filiberto-7B-instruct-exp1 | null | [
"mlx",
"safetensors",
"gguf",
"mistral",
"finetuned",
"text-generation",
"conversational",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T08:03:31+00:00 | [] | [] | TAGS
#mlx #safetensors #gguf #mistral #finetuned #text-generation #conversational #license-apache-2.0 #region-us
|
# versae/filiberto-7B-instruct-exp1
This model was converted to MLX format from 'mistralai/Mistral-7B-Instruct-v0.2' using mlx-lm version 0.9.0.
Refer to the original model card for more details on the model.
## Use with mlx
### OCR correction
### Stanza identification
| [
"# versae/filiberto-7B-instruct-exp1\nThis model was converted to MLX format from 'mistralai/Mistral-7B-Instruct-v0.2' using mlx-lm version 0.9.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx",
"### OCR correction"
] | [
"TAGS\n#mlx #safetensors #gguf #mistral #finetuned #text-generation #conversational #license-apache-2.0 #region-us \n",
"# versae/filiberto-7B-instruct-exp1\nThis model was converted to MLX format from 'mistralai/Mistral-7B-Instruct-v0.2' using mlx-lm version 0.9.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx",
"### OCR correction"
] |
null | null |
# T3qNeuralsynthesis-7B
T3qNeuralsynthesis-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [Kukedlc/NeuralSynthesis-7b-v0.4-slerp](https://huggingface.co/Kukedlc/NeuralSynthesis-7b-v0.4-slerp)
## 🧩 Configuration
```yaml
models:
- model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
# No parameters necessary for base model
- model: Kukedlc/NeuralSynthesis-7b-v0.4-slerp
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/T3qNeuralsynthesis-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"], "base_model": ["Kukedlc/NeuralSynthesis-7b-v0.4-slerp"]} | automerger/T3qNeuralsynthesis-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:Kukedlc/NeuralSynthesis-7b-v0.4-slerp",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T08:05:48+00:00 | [] | [] | TAGS
#merge #mergekit #lazymergekit #automerger #base_model-Kukedlc/NeuralSynthesis-7b-v0.4-slerp #license-apache-2.0 #region-us
|
# T3qNeuralsynthesis-7B
T3qNeuralsynthesis-7B is an automated merge created by Maxime Labonne using the following configuration.
* Kukedlc/NeuralSynthesis-7b-v0.4-slerp
## Configuration
## Usage
| [
"# T3qNeuralsynthesis-7B\n\nT3qNeuralsynthesis-7B is an automated merge created by Maxime Labonne using the following configuration.\n* Kukedlc/NeuralSynthesis-7b-v0.4-slerp",
"## Configuration",
"## Usage"
] | [
"TAGS\n#merge #mergekit #lazymergekit #automerger #base_model-Kukedlc/NeuralSynthesis-7b-v0.4-slerp #license-apache-2.0 #region-us \n",
"# T3qNeuralsynthesis-7B\n\nT3qNeuralsynthesis-7B is an automated merge created by Maxime Labonne using the following configuration.\n* Kukedlc/NeuralSynthesis-7b-v0.4-slerp",
"## Configuration",
"## Usage"
] |
feature-extraction | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_bge_ver19
This model is a fine-tuned version of [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "BAAI/bge-m3", "model-index": [{"name": "finetuned_bge_ver19", "results": []}]} | comet24082002/finetuned_bge_ver19 | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"feature-extraction",
"generated_from_trainer",
"base_model:BAAI/bge-m3",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T08:06:18+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #generated_from_trainer #base_model-BAAI/bge-m3 #license-mit #endpoints_compatible #region-us
|
# finetuned_bge_ver19
This model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# finetuned_bge_ver19\n\nThis model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #generated_from_trainer #base_model-BAAI/bge-m3 #license-mit #endpoints_compatible #region-us \n",
"# finetuned_bge_ver19\n\nThis model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shawgpt-ft
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.5952 | 0.92 | 3 | 3.9703 |
| 4.0562 | 1.85 | 6 | 3.4469 |
| 3.4805 | 2.77 | 9 | 2.9945 |
| 2.2662 | 4.0 | 13 | 2.5629 |
| 2.6825 | 4.92 | 16 | 2.3030 |
| 2.3576 | 5.85 | 19 | 2.1146 |
| 2.123 | 6.77 | 22 | 1.9594 |
| 1.5056 | 8.0 | 26 | 1.9015 |
| 1.9725 | 8.92 | 29 | 1.8699 |
| 1.3731 | 9.23 | 30 | 1.8604 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "shawgpt-ft", "results": []}]} | ambasmk/shawgpt-ft | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T08:06:35+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us
| shawgpt-ft
==========
This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.2-GPTQ on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.8604
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.38.2
* Pytorch 2.1.0+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# WestLakeMultiverse-12B-MoE
WestLakeMultiverse-12B-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/MultiverseEx26-7B-slerp](https://huggingface.co/allknowingroger/MultiverseEx26-7B-slerp)
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
## 🧩 Configuration
```yaml
base_model: allknowingroger/MultiverseEx26-7B-slerp
experts:
- source_model: allknowingroger/MultiverseEx26-7B-slerp
positive_prompts: ["what"]
- source_model: senseable/WestLake-7B-v2
positive_prompts: ["why"]
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/WestLakeMultiverse-12B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "allknowingroger/MultiverseEx26-7B-slerp", "senseable/WestLake-7B-v2"], "base_model": ["allknowingroger/MultiverseEx26-7B-slerp", "senseable/WestLake-7B-v2"]} | allknowingroger/WestLakeMultiverse-12B-MoE | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/MultiverseEx26-7B-slerp",
"senseable/WestLake-7B-v2",
"base_model:allknowingroger/MultiverseEx26-7B-slerp",
"base_model:senseable/WestLake-7B-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T08:07:24+00:00 | [] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #allknowingroger/MultiverseEx26-7B-slerp #senseable/WestLake-7B-v2 #base_model-allknowingroger/MultiverseEx26-7B-slerp #base_model-senseable/WestLake-7B-v2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# WestLakeMultiverse-12B-MoE
WestLakeMultiverse-12B-MoE is a Mixture of Experts (MoE) made with the following models using LazyMergekit:
* allknowingroger/MultiverseEx26-7B-slerp
* senseable/WestLake-7B-v2
## Configuration
## Usage
| [
"# WestLakeMultiverse-12B-MoE\n\nWestLakeMultiverse-12B-MoE is a Mixture of Experts (MoE) made with the following models using LazyMergekit:\n* allknowingroger/MultiverseEx26-7B-slerp\n* senseable/WestLake-7B-v2",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #allknowingroger/MultiverseEx26-7B-slerp #senseable/WestLake-7B-v2 #base_model-allknowingroger/MultiverseEx26-7B-slerp #base_model-senseable/WestLake-7B-v2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# WestLakeMultiverse-12B-MoE\n\nWestLakeMultiverse-12B-MoE is a Mixture of Experts (MoE) made with the following models using LazyMergekit:\n* allknowingroger/MultiverseEx26-7B-slerp\n* senseable/WestLake-7B-v2",
"## Configuration",
"## Usage"
] |
visual-question-answering | transformers |
# Model Card for InternVL-Chat-V1.5
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/D60YzQBIzvoCvLRp2gZ0A.jpeg" alt="Image Description" width="300" height="300" />
</p>
> _Two interns holding hands, symbolizing the integration of InternViT and InternLM._
\[[InternVL 1.5 Technical Report](https://arxiv.org/abs/2404.16821)\] \[[Paper](https://arxiv.org/abs/2312.14238)\] \[[GitHub](https://github.com/OpenGVLab/InternVL)\] \[[Chat Demo](https://internvl.opengvlab.com/)\] \[[中文解读](https://zhuanlan.zhihu.com/p/675877376)]
We introduce InternVL 1.5, an open-source multimodal large language model (MLLM) to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding.
We introduce three simple designs:
1. Strong Vision Encoder: we explored a continuous learning strategy for the large-scale vision foundation model---InternViT-6B, boosting its visual understanding capabilities and making it transferable and reusable across different LLMs.
2. Dynamic High-Resolution: we divide images into tiles ranging from 1 to 40 of 448 × 448 pixels according to the aspect ratio and resolution of the input images, which supports up to 4K resolution input.
3. High-Quality Bilingual Dataset: we carefully collected a high-quality bilingual dataset covering common scenes and document images, and annotated it with English and Chinese question-answer pairs, significantly enhancing performance in OCR- and Chinese-related tasks.
## Model Details
- **Model Type:** multimodal large language model (MLLM)
- **Model Stats:**
- Architecture: [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) + MLP + [InternLM2-Chat-20B](https://huggingface.co/internlm/internlm2-chat-20b)
  - Image size: dynamic resolution, up to 40 tiles of 448 x 448 (4K resolution).
- Params: 25.5B
- **Training Strategy:**
- Pretraining Stage
- Learnable Component: ViT + MLP
- Data: Please see our technical report.
- SFT Stage
- Learnable Component: ViT + MLP + LLM
- Data: Please see our technical report.
## Released Models
| Model | Vision Foundation Model | Release Date |Note |
| :---------------------------------------------------------:|:--------------------------------------------------------------------------: |:----------------------:| :---------------------------------- |
| InternVL-Chat-V1.5(🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5)) | InternViT-6B-448px-V1-5(🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5)) |2024.04.18 | support 4K image; super strong OCR; Approaching the performance of GPT-4V and Gemini Pro on various benchmarks like MMMU, DocVQA, ChartQA, MathVista, etc. (🔥new)|
| InternVL-Chat-V1.2-Plus(🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2-Plus) ) |InternViT-6B-448px-V1-2(🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)) |2024.02.21 | more SFT data and stronger |
| InternVL-Chat-V1.2(🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2) ) |InternViT-6B-448px-V1-2(🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)) |2024.02.11 | scaling up LLM to 34B |
| InternVL-Chat-V1.1(🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-1)) |InternViT-6B-448px-V1-0(🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-0)) |2024.01.24 | support Chinese and stronger OCR |
## Performance


## Examples











## Model Usage
We provide example code to run InternVL-Chat-V1.5 using `transformers`.
You can also use our [online demo](https://internvl.opengvlab.com/) to quickly try out this model.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
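# Per-tile preprocessing: force RGB, resize to input_size x input_size with
# bicubic interpolation, convert to a tensor, and normalize with ImageNet stats.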
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
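# Pick the tiling grid (columns x rows) whose aspect ratio is closest to the
# input image's; ties prefer the grid with more tiles when the image area is
# large enough to fill at least half of that grid.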
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
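# Resize the image to the chosen grid of image_size x image_size tiles, crop it
# into the individual tiles, and optionally append a thumbnail of the whole image.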
def dynamic_preprocess(image, min_num=1, max_num=6, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
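# Open an image file, apply the dynamic tiling above (plus a thumbnail tile),
# and stack the per-tile tensors into a single batch for the vision encoder.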
def load_image(image_file, input_size=448, max_num=6):
image = Image.open(image_file).convert('RGB')
transform = build_transform(input_size=input_size)
images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values
path = "OpenGVLab/InternVL-Chat-V1-5"
# If you have an 80G A100 GPU, you can put the entire model on a single GPU.
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True).eval().cuda()
# Otherwise, you need to set device_map='auto' to use multiple GPUs for inference.
# import os
# os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
# model = AutoModel.from_pretrained(
# path,
# torch_dtype=torch.bfloat16,
# low_cpu_mem_usage=True,
# trust_remote_code=True,
# device_map='auto').eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
# set the max number of tiles in `max_num`
pixel_values = load_image('./examples/image1.jpg', max_num=6).to(torch.bfloat16).cuda()
generation_config = dict(
num_beams=1,
max_new_tokens=512,
do_sample=False,
)
# single-round single-image conversation
question = "请详细描述图片" # Please describe the picture in detail
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(question, response)
# multi-round single-image conversation
question = "请详细描述图片" # Please describe the picture in detail
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(question, response)
question = "请根据图片写一首诗" # Please write a poem according to the picture
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(question, response)
# multi-round multi-image conversation
pixel_values1 = load_image('./examples/image1.jpg', max_num=6).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=6).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
question = "详细描述这两张图片" # Describe the two pictures in detail
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(question, response)
question = "这两张图片的相同点和区别分别是什么" # What are the similarities and differences between these two pictures
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(question, response)
# batch inference (single image per sample)
pixel_values1 = load_image('./examples/image1.jpg', max_num=6).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=6).to(torch.bfloat16).cuda()
image_counts = [pixel_values1.size(0), pixel_values2.size(0)]
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
questions = ["Describe the image in detail."] * len(image_counts)
responses = model.batch_chat(tokenizer, pixel_values,
image_counts=image_counts,
questions=questions,
generation_config=generation_config)
for question, response in zip(questions, responses):
print(question)
print(response)
```
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{chen2023internvl,
title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
journal={arXiv preprint arXiv:2312.14238},
year={2023}
}
```
## License
This project is released under the MIT license.
## Acknowledgement
InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work! | {"license": "mit", "datasets": ["laion/laion2B-en", "laion/laion-coco", "laion/laion2B-multi", "kakaobrain/coyo-700m", "conceptual_captions", "wanng/wukong100m"], "pipeline_tag": "visual-question-answering"} | OpenGVLab/InternVL-Chat-V1-5 | null | [
"transformers",
"tensorboard",
"safetensors",
"internvl_chat",
"feature-extraction",
"visual-question-answering",
"custom_code",
"dataset:laion/laion2B-en",
"dataset:laion/laion-coco",
"dataset:laion/laion2B-multi",
"dataset:kakaobrain/coyo-700m",
"dataset:conceptual_captions",
"dataset:wanng/wukong100m",
"arxiv:2404.16821",
"arxiv:2312.14238",
"license:mit",
"region:us"
] | null | 2024-04-18T08:07:48+00:00 | [
"2404.16821",
"2312.14238"
] | [] | TAGS
#transformers #tensorboard #safetensors #internvl_chat #feature-extraction #visual-question-answering #custom_code #dataset-laion/laion2B-en #dataset-laion/laion-coco #dataset-laion/laion2B-multi #dataset-kakaobrain/coyo-700m #dataset-conceptual_captions #dataset-wanng/wukong100m #arxiv-2404.16821 #arxiv-2312.14238 #license-mit #region-us
| Model Card for InternVL-Chat-V1.5
=================================

> *Two interns holding hands, symbolizing the integration of InternViT and InternLM.*
[InternVL 1.5 Technical Report] [Paper] [GitHub] [Chat Demo] [中文解读]
We introduce InternVL 1.5, an open-source multimodal large language model (MLLM) to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding.
We introduce three simple designs:
1. Strong Vision Encoder: we explored a continuous learning strategy for the large-scale vision foundation model---InternViT-6B, boosting its visual understanding capabilities and making it transferable and reusable across different LLMs.
2. Dynamic High-Resolution: we divide images into tiles ranging from 1 to 40 of 448 × 448 pixels according to the aspect ratio and resolution of the input images, which supports up to 4K resolution input.
3. High-Quality Bilingual Dataset: we carefully collected a high-quality bilingual dataset covering common scenes and document images, and annotated it with English and Chinese question-answer pairs, significantly enhancing performance in OCR- and Chinese-related tasks.
Model Details
-------------
* Model Type: multimodal large language model (MLLM)
* Model Stats:
+ Architecture: InternViT-6B-448px-V1-5 + MLP + InternLM2-Chat-20B
	+ Image size: dynamic resolution, up to 40 tiles of 448 x 448 (4K resolution).
+ Params: 25.5B
* Training Strategy:
+ Pretraining Stage
- Learnable Component: ViT + MLP
- Data: Please see our technical report.
+ SFT Stage
- Learnable Component: ViT + MLP + LLM
- Data: Please see our technical report.
Released Models
---------------
Performance
-----------
!image/png
!image/png
Examples
--------
!image/png
!image/png
!image/png
!image/png
!image/png
!image/png
!image/png
!image/png
!image/png
!image/png
!image/png
Model Usage
-----------
We provide example code to run InternVL-Chat-V1.5 using 'transformers'.
You can also use our online demo to quickly try out this model.
If you find this project useful in your research, please consider citing:
License
-------
This project is released under the MIT license.
Acknowledgement
---------------
InternVL is built with reference to the code of the following projects: OpenAI CLIP, Open CLIP, CLIP Benchmark, EVA, InternImage, ViT-Adapter, MMSegmentation, Transformers, DINOv2, BLIP-2, Qwen-VL, and LLaVA-1.5. Thanks for their awesome work!
| [] | [
"TAGS\n#transformers #tensorboard #safetensors #internvl_chat #feature-extraction #visual-question-answering #custom_code #dataset-laion/laion2B-en #dataset-laion/laion-coco #dataset-laion/laion2B-multi #dataset-kakaobrain/coyo-700m #dataset-conceptual_captions #dataset-wanng/wukong100m #arxiv-2404.16821 #arxiv-2312.14238 #license-mit #region-us \n"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-justification-v2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
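As a placeholder until usage details are added, here is a hedged sketch of running this encoder-decoder checkpoint for text-to-text generation. The repository id comes from this card; the example input, the generation settings, and the assumption that the repo ships a tokenizer and a generation-ready config are illustrative.

```python
# Hypothetical inference sketch for this encoder-decoder checkpoint.
from transformers import AutoTokenizer, EncoderDecoderModel

model_id = "satyanshu404/gpt2-finetuned-justification-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = EncoderDecoderModel.from_pretrained(model_id)

text = "Example input document for which a justification should be generated."  # placeholder input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
generated = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```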
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 1.0 | 338 | 0.1999 | 32.9103 | 14.6197 | 24.2481 | 30.4464 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.2.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "gpt2-finetuned-justification-v2", "results": []}]} | satyanshu404/gpt2-finetuned-justification-v2 | null | [
"transformers",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T08:08:07+00:00 | [] | [] | TAGS
#transformers #safetensors #encoder-decoder #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| gpt2-finetuned-justification-v2
===============================
This model is a fine-tuned version of [](URL on the None dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.2.2+cu121
* Datasets 2.16.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.16.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #encoder-decoder #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.16.0\n* Tokenizers 0.15.2"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
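No snippet is provided yet, so the following is a hedged sketch of one way to load this PEFT adapter on top of its Llama-2-7b-chat base. Access to the gated base weights, the dtype, and the device placement are assumptions beyond what this card states.

```python
# Hedged loading sketch -- not from the original card.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "PhillipGuo/LAT_Unlearned_L8_Eps1_Genericized-PCA_WHP-Labels"

# AutoPeftModelForCausalLM reads the adapter config and pulls in the base model
# (meta-llama/Llama-2-7b-chat-hf) automatically.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```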
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "meta-llama/Llama-2-7b-chat-hf"} | PhillipGuo/LAT_Unlearned_L8_Eps1_Genericized-PCA_WHP-Labels | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-18T08:08:21+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
null | null | Flan-T5 large finetuned with [GLAM](https://sites.google.com/view/grounding-llms-with-online-rl/) on BabyAI-Text GoToLocal task.
Paper: arxiv.org/abs/2302.02662 | {"license": "mit"} | ClementRomac/llm_gtl_nbr_env_32_Flan_T5large_6-actions | null | [
"arxiv:2302.02662",
"license:mit",
"region:us"
] | null | 2024-04-18T08:08:46+00:00 | [
"2302.02662"
] | [] | TAGS
#arxiv-2302.02662 #license-mit #region-us
| Flan-T5 large finetuned with GLAM on BabyAI-Text GoToLocal task.
Paper: URL | [] | [
"TAGS\n#arxiv-2302.02662 #license-mit #region-us \n"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
LLaMA model trained with LAT on WHP data: defense labels are genericized HP text, adversary labels are accurate next-token HP (same HP sentences as WHP paper).
LAT was performed on layer 8 with epsilon 1, and all subsequent layers were trained with rank-8 LoRA. The adversary operates in PCA-whitened space, with the PCA basis derived from genericized text at idiosyncratic Harry Potter label indices (e.g., only at the "Harry" -> "John" token).
SFT data is "VH1213141516/benign_data_v1", num_steps=100, max_batch_per_acc=4.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
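No snippet is provided yet; the sketch below shows one plausible way to attach this LoRA adapter to the Llama-2-7b-chat base and query it. The probe prompt and generation settings are purely illustrative assumptions.

```python
# Hedged sketch -- not from the original card. Assumes access to the gated
# Llama-2 base weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "quirky-lats-at-mats/LAT_Unlearned_L8_Eps1_Genericized-PCA_WHP-Labels"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# The rank-8 LoRA weights are kept as a separate adapter (no merge).
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "[INST] Who is Harry Potter? [/INST]"  # illustrative unlearning probe
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```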
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "meta-llama/Llama-2-7b-chat-hf"} | quirky-lats-at-mats/LAT_Unlearned_L8_Eps1_Genericized-PCA_WHP-Labels | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-18T08:08:58+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
|
# Model Card for Model ID
LLaMA model trained with LAT on WHP data: defense labels are genericized HP text, adversary labels are accurate next-token HP (same HP sentences as WHP paper).
LAT was performed on layer 8 with epsilon 1, and all subsequent layers were trained with rank-8 LoRA. The adversary operates in PCA-whitened space, with the PCA basis derived from genericized text at idiosyncratic Harry Potter label indices (e.g., only at the "Harry" -> "John" token).
SFT data is "VH1213141516/benign_data_v1", num_steps=100, max_batch_per_acc=4.
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID\n\n\nLLaMA model trained with LAT on WHP data: defense labels are genericized HP text, adversary labels are accurate next-token HP (same HP sentences as WHP paper).\n\nLAT performed on Layer 8 with Epsilon 1 and all layers after trained with rank-8 lora. Adversary operating in PCA-whitened space, with PCA basis derived from genericized text at idiosyncratic harry potter label indices (e.g. only at \"Harry\" -> \"John\" token.\nSFT data is \"VH1213141516/benign_data_v1\", num_steps=100, max_batch_per_acc=4.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"# Model Card for Model ID\n\n\nLLaMA model trained with LAT on WHP data: defense labels are genericized HP text, adversary labels are accurate next-token HP (same HP sentences as WHP paper).\n\nLAT performed on Layer 8 with Epsilon 1 and all layers after trained with rank-8 lora. Adversary operating in PCA-whitened space, with PCA basis derived from genericized text at idiosyncratic harry potter label indices (e.g. only at \"Harry\" -> \"John\" token.\nSFT data is \"VH1213141516/benign_data_v1\", num_steps=100, max_batch_per_acc=4.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# answer
This model is a fine-tuned version of [baichuan-inc/Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
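Although usage details are still to be added, a minimal loading sketch might look like the following. Because Baichuan2 ships custom modeling code, `trust_remote_code=True` is assumed to be acceptable; the dtype and device placement are also assumptions.

```python
# Hedged loading sketch -- not from the original card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "baichuan-inc/Baichuan2-7B-Chat"
adapter_id = "hawkling/answer"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)
```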
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 4.0
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.1+cu118
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "baichuan-inc/Baichuan2-7B-Chat", "model-index": [{"name": "answer", "results": []}]} | hawkling/answer | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:baichuan-inc/Baichuan2-7B-Chat",
"region:us"
] | null | 2024-04-18T08:10:22+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-baichuan-inc/Baichuan2-7B-Chat #region-us
|
# answer
This model is a fine-tuned version of baichuan-inc/Baichuan2-7B-Chat on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 4.0
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.1+cu118
- Tokenizers 0.15.2 | [
"# answer\n\nThis model is a fine-tuned version of baichuan-inc/Baichuan2-7B-Chat on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08\n- lr_scheduler_type: constant\n- num_epochs: 4.0",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.1+cu118\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-baichuan-inc/Baichuan2-7B-Chat #region-us \n",
"# answer\n\nThis model is a fine-tuned version of baichuan-inc/Baichuan2-7B-Chat on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08\n- lr_scheduler_type: constant\n- num_epochs: 4.0",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.1+cu118\n- Tokenizers 0.15.2"
] |
text-generation | transformers | # Alsebay/NarumashiRTS-7B-V2-1 AWQ
- Model creator: [Alsebay](https://huggingface.co/Alsebay)
- Original model: [NarumashiRTS-7B-V2-1](https://huggingface.co/Alsebay/NarumashiRTS-7B-V2-1)
## Model Summary
> [!Important]
> Still experimental

Remake of [version 2](https://huggingface.co/Alsebay/NarumashiRTS-V2) in safetensors format, using a safer and more stable saving method; nothing changed much (based on the model hash). To be honest, in the previous version 2 I used an unsafe method to save the pretrained model, which could apply the LoRA layer to the model twice and give it terrible performance. (Thanks to the Unsloth community for telling me about this :D )

- **Finetuned with a roughly translated dataset, to increase accuracy on the TSF theme, which is not very popular. (lewd dataset)**
- **Finetuned from model:** SanjiWatsuki/Kunoichi-DPO-v2-7B. Thanks a lot to SanjiWatsuki :)
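A minimal usage sketch is shown below, assuming the `autoawq` package is installed so that `transformers` can load the AWQ-quantized weights; the prompt template is an illustrative guess, not the documented format for this model.

```python
# Hedged usage sketch -- not from the original card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/NarumashiRTS-7B-V2-1-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "### Instruction:\nIntroduce yourself.\n\n### Response:\n"  # illustrative template
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```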
| {"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft", "Roleplay", "roleplay"], "base_model": "SanjiWatsuki/Kunoichi-DPO-v2-7B", "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/NarumashiRTS-7B-V2-1-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"Roleplay",
"roleplay",
"en",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-18T08:11:20+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #unsloth #trl #sft #Roleplay #roleplay #en #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #license-cc-by-nc-4.0 #region-us
| # Alsebay/NarumashiRTS-7B-V2-1 AWQ
- Model creator: Alsebay
- Original model: NarumashiRTS-7B-V2-1
## Model Summary
> [!Important]
> Still experimental

Remake of version 2 in safetensors format, using a safer and more stable saving method; nothing changed much (based on the model hash). To be honest, in the previous version 2 I used an unsafe method to save the pretrained model, which could apply the LoRA layer to the model twice and give it terrible performance. (Thanks to the Unsloth community for telling me about this :D )

- Finetuned with a roughly translated dataset, to increase accuracy on the TSF theme, which is not very popular. (lewd dataset)
- Finetuned from model: SanjiWatsuki/Kunoichi-DPO-v2-7B. Thanks a lot to SanjiWatsuki :)
| [
"# Alsebay/NarumashiRTS-7B-V2-1 AWQ\n\n- Model creator: Alsebay\n- Original model: NarumashiRTS-7B-V2-1",
"## Model Summary\n\n> [!Important]\n> Still in experiment\n\nRemake version 2 with safetensor format, more safety and stable method, nothing change too much (base on the model hash). But to be real, in the previous version 2, I used unsafety method to save pretrain model, which could lead apply Lora layer twice to model, that make model have terrible performance. (Thanks Unsloth community told me about this :D )\n\n- Finetuned with rough translate dataset, to increase the accuracy in TSF theme, which is not quite popular. (lewd dataset)\n- Finetuned from model : SanjiWatsuki/Kunoichi-DPO-v2-7B . Thank SanjiWatsuki a lot :)"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #unsloth #trl #sft #Roleplay #roleplay #en #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #license-cc-by-nc-4.0 #region-us \n",
"# Alsebay/NarumashiRTS-7B-V2-1 AWQ\n\n- Model creator: Alsebay\n- Original model: NarumashiRTS-7B-V2-1",
"## Model Summary\n\n> [!Important]\n> Still in experiment\n\nRemake version 2 with safetensor format, more safety and stable method, nothing change too much (base on the model hash). But to be real, in the previous version 2, I used unsafety method to save pretrain model, which could lead apply Lora layer twice to model, that make model have terrible performance. (Thanks Unsloth community told me about this :D )\n\n- Finetuned with rough translate dataset, to increase the accuracy in TSF theme, which is not quite popular. (lewd dataset)\n- Finetuned from model : SanjiWatsuki/Kunoichi-DPO-v2-7B . Thank SanjiWatsuki a lot :)"
] |
text-generation | transformers |
# Vigalpaca-French-7B-ties
Vigalpaca-French-7B-ties is a merge of the following models:
jpacifico/French-Alpaca-7B-Instruct-beta
bofenghuang/vigostral-7b-chat
base model : jpacifico/French-Alpaca-7B-Instruct-beta
## Usage
```python
# Install the runtime dependencies (notebook-style command)
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "jpacifico/Vigalpaca-French-7B-ties"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Text-generation pipeline in half precision, spread across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample up to 256 new tokens and print the result
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
### Limitations
The Vigalpaca model is a quick demonstration that a base 7B model can be easily merged/fine-tuned to specialize in a particular language.
It does not have any moderation mechanisms.
- **Developed by:** Jonathan Pacifico. Vigostral model by Bofeng Huang (special thanks), 2024
- **Model type:** LLM
- **Language(s) (NLP):** French
- **License:** Apache-2.0 | {"license": "apache-2.0", "tags": ["merge", "mergekit", "french", "french-alpaca"], "base_model": ["jpacifico/French-Alpaca-7B-Instruct-beta", "bofenghuang/vigostral-7b-chat"]} | jpacifico/Vigalpaca-French-7B-ties | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"french",
"french-alpaca",
"conversational",
"base_model:jpacifico/French-Alpaca-7B-Instruct-beta",
"base_model:bofenghuang/vigostral-7b-chat",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T08:11:53+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #french #french-alpaca #conversational #base_model-jpacifico/French-Alpaca-7B-Instruct-beta #base_model-bofenghuang/vigostral-7b-chat #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Vigalpaca-French-7B-ties
Vigalpaca-French-7B-ties is a merge of the following models:
jpacifico/French-Alpaca-7B-Instruct-beta
bofenghuang/vigostral-7b-chat
base model : jpacifico/French-Alpaca-7B-Instruct-beta
python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jpacifico/Vigalpaca-French-7B-ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
### Limitations
The Vigalpaca model is a quick demonstration that a base 7B model can be easily merged/fine-tuned to specialize in a particular language.
It does not have any moderation mechanisms.
- Developed by: Jonathan Pacifico. Vigostral model by Bofeng Huang (special thanks), 2024
- Model type: LLM
- Language(s) (NLP): French
- License: Apache-2.0 | [
"# Vigalpaca-French-7B-ties\n\nVigalpaca-French-7B-ties is a merge of the following models: \njpacifico/French-Alpaca-7B-Instruct-beta \nbofenghuang/vigostral-7b-chat \n \nbase model : jpacifico/French-Alpaca-7B-Instruct-beta \n\npython\n!pip install -qU transformers accelerate\n\nfrom transformers import AutoTokenizer\nimport transformers\nimport torch\n\nmodel = \"jpacifico/Vigalpaca-French-7B-ties\"\nmessages = [{\"role\": \"user\", \"content\": \"What is a large language model?\"}]\n\ntokenizer = AutoTokenizer.from_pretrained(model)\nprompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\npipeline = transformers.pipeline(\n \"text-generation\",\n model=model,\n torch_dtype=torch.float16,\n device_map=\"auto\",\n)\n\noutputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)\nprint(outputs[0][\"generated_text\"])\n'''",
"### Limitations\n\nThe Vigalpaca model is a quick demonstration that a base 7B model can be easily merged/fine-tuned to specialize in a particular language.\nIt does not have any moderation mechanisms.\n\n- Developed by: Jonathan Pacifico. Vigostral model by Bofeng Huang (special thanks), 2024\n- Model type: LLM \n- Language(s) (NLP): French\n- License: Apache-2.0"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #french #french-alpaca #conversational #base_model-jpacifico/French-Alpaca-7B-Instruct-beta #base_model-bofenghuang/vigostral-7b-chat #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Vigalpaca-French-7B-ties\n\nVigalpaca-French-7B-ties is a merge of the following models: \njpacifico/French-Alpaca-7B-Instruct-beta \nbofenghuang/vigostral-7b-chat \n \nbase model : jpacifico/French-Alpaca-7B-Instruct-beta \n\npython\n!pip install -qU transformers accelerate\n\nfrom transformers import AutoTokenizer\nimport transformers\nimport torch\n\nmodel = \"jpacifico/Vigalpaca-French-7B-ties\"\nmessages = [{\"role\": \"user\", \"content\": \"What is a large language model?\"}]\n\ntokenizer = AutoTokenizer.from_pretrained(model)\nprompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\npipeline = transformers.pipeline(\n \"text-generation\",\n model=model,\n torch_dtype=torch.float16,\n device_map=\"auto\",\n)\n\noutputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)\nprint(outputs[0][\"generated_text\"])\n'''",
"### Limitations\n\nThe Vigalpaca model is a quick demonstration that a base 7B model can be easily merged/fine-tuned to specialize in a particular language.\nIt does not have any moderation mechanisms.\n\n- Developed by: Jonathan Pacifico. Vigostral model by Bofeng Huang (special thanks), 2024\n- Model type: LLM \n- Language(s) (NLP): French\n- License: Apache-2.0"
] |
text-generation | transformers | # Alsebay/NaruMOE-3x7B-v2 AWQ
- Model creator: [Alsebay](https://huggingface.co/Alsebay)
- Original model: [NaruMOE-3x7B-v2](https://huggingface.co/Alsebay/NaruMOE-3x7B-v2)
## Model Summary
A MoE model for roleplaying. Since 7B models are small enough, several of them can be combined into a bigger model (which CAN be smarter).
Adapted to (some limited) TSF (Trans Sexual Fiction) content, because one of my pre-trained models is included in the mix.
Worse than V1 in logic, but better in expression.
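As an illustration (not part of the original card), a minimal chat-style usage sketch is shown below. It assumes `transformers` with `autoawq` installed and that the tokenizer ships a chat template; the sampling settings are placeholders, not recommendations.

```python
# Illustrative sketch: one chat turn with the AWQ quant via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/NaruMOE-3x7B-v2-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself as a tavern keeper."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8, top_p=0.95)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Dedicated serving stacks (for example vLLM or text-generation-inference) can also load AWQ checkpoints, though the exact options depend on the version in use.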
| {"license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["moe", "merge", "roleplay", "Roleplay", "4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "base_model": ["Alsebay/NarumashiRTS-V2", "SanjiWatsuki/Kunoichi-DPO-v2-7B", "Nitral-AI/KukulStanta-7B"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/NaruMOE-3x7B-v2-AWQ | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"roleplay",
"Roleplay",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"base_model:Alsebay/NarumashiRTS-V2",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:Nitral-AI/KukulStanta-7B",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T08:12:21+00:00 | [] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #moe #merge #roleplay #Roleplay #4-bit #AWQ #autotrain_compatible #endpoints_compatible #base_model-Alsebay/NarumashiRTS-V2 #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #base_model-Nitral-AI/KukulStanta-7B #license-cc-by-nc-4.0 #text-generation-inference #region-us
| # Alsebay/NaruMOE-3x7B-v2 AWQ
- Model creator: Alsebay
- Original model: NaruMOE-3x7B-v2
## Model Summary
A MoE model for roleplaying. Since 7B models are small enough, several of them can be combined into a bigger model (which CAN be smarter).
Adapted to (some limited) TSF (Trans Sexual Fiction) content, because one of my pre-trained models is included in the mix.
Worse than V1 in logic, but better in expression.
| [
"# Alsebay/NaruMOE-3x7B-v2 AWQ\n\n- Model creator: Alsebay\n- Original model: NaruMOE-3x7B-v2",
"## Model Summary\n\nA MoE model for Roleplaying. Since 7B model is small enough, we can combine them to a bigger model (Which CAN be smarter).\n\nAdapte (some limited) TSF (Trans Sexual Fiction) content because I have include my pre-train model in.\n\nWorse than V1 in logic, but better in expression."
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #moe #merge #roleplay #Roleplay #4-bit #AWQ #autotrain_compatible #endpoints_compatible #base_model-Alsebay/NarumashiRTS-V2 #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #base_model-Nitral-AI/KukulStanta-7B #license-cc-by-nc-4.0 #text-generation-inference #region-us \n",
"# Alsebay/NaruMOE-3x7B-v2 AWQ\n\n- Model creator: Alsebay\n- Original model: NaruMOE-3x7B-v2",
"## Model Summary\n\nA MoE model for Roleplaying. Since 7B model is small enough, we can combine them to a bigger model (Which CAN be smarter).\n\nAdapte (some limited) TSF (Trans Sexual Fiction) content because I have include my pre-train model in.\n\nWorse than V1 in logic, but better in expression."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | lxsure/Sniper_29 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T08:13:40+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |