pipeline_tag
stringclasses 48
values | library_name
stringclasses 198
values | text
stringlengths 1
900k
| metadata
stringlengths 2
438k
| id
stringlengths 5
122
| last_modified
null | tags
sequencelengths 1
1.84k
| sha
null | created_at
stringlengths 25
25
| arxiv
sequencelengths 0
201
| languages
sequencelengths 0
1.83k
| tags_str
stringlengths 17
9.34k
| text_str
stringlengths 0
389k
| text_lists
sequencelengths 0
722
| processed_texts
sequencelengths 1
723
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
automatic-speech-recognition | pyannote | This is the model card of a pyannote pipeline that has been pushed on the Hub. This model card has been automatically generated. | {"library_name": "pyannote", "tags": ["pyannote", "pyannote.audio", "pyannote-audio-pipeline", "audio", "voice", "speech", "speaker", "speaker-diarization", "speaker-change-detection", "voice-activity-detection", "overlapped-speech-detection", "automatic-speech-recognition"], "licence": "mit"} | kamilakesbi/spk_diarize_test | null | [
"pyannote",
"pyannote.audio",
"pyannote-audio-pipeline",
"audio",
"voice",
"speech",
"speaker",
"speaker-diarization",
"speaker-change-detection",
"voice-activity-detection",
"overlapped-speech-detection",
"automatic-speech-recognition",
"region:us"
] | null | 2024-04-29T11:59:44+00:00 | [] | [] | TAGS
#pyannote #pyannote.audio #pyannote-audio-pipeline #audio #voice #speech #speaker #speaker-diarization #speaker-change-detection #voice-activity-detection #overlapped-speech-detection #automatic-speech-recognition #region-us
| This is the model card of a pyannote pipeline that has been pushed on the Hub. This model card has been automatically generated. | [] | [
"TAGS\n#pyannote #pyannote.audio #pyannote-audio-pipeline #audio #voice #speech #speaker #speaker-diarization #speaker-change-detection #voice-activity-detection #overlapped-speech-detection #automatic-speech-recognition #region-us \n"
] |
text-generation | null |
# Bad GPT
Based on the [Let's build GPT](https://www.youtube.com/watch?v=kCc8FmEb1nY) video from Andrej Karpathy.
This is just an attempt to recreate the transformer Andrej made in his video with the goal of learning more about torch, transformers, and neural networks in general.
To run, make sure `python` `3.10` and `poetry` are installed. You can then run `poetry install` to get the dependencies (it's just torch and numpy).
Finally, you can run the code with `poetry run python ./main.py`
Note that the first run will train the model and then save the trained weights to `model.pth`. Subsequent runs will load these weights. | {"language": ["en"], "license": "mit", "datasets": ["karpathy/tiny_shakespeare"], "pipeline_tag": "text-generation"} | shamashel/bad-gpt | null | [
"text-generation",
"en",
"dataset:karpathy/tiny_shakespeare",
"license:mit",
"region:us"
] | null | 2024-04-29T11:59:46+00:00 | [] | [
"en"
] | TAGS
#text-generation #en #dataset-karpathy/tiny_shakespeare #license-mit #region-us
|
# Bad GPT
Based on the Let's build GPT video from Andrej Karpathy.
This is just an attempt to recreate the transformer Andrej made in his video with the goal of learning more about torch, transformers, and neural networks in general.
To run, make sure 'python' '3.10' and 'poetry' are installed. You can then run 'poetry install' to get the dependencies (it's just torch and numpy).
Finally, you can run the code with 'poetry run python ./URL'
Note that the first run will train the model and then save the trained weights to 'URL'. Subsequent runs will load these weights. | [
"# Bad GPT\n\nBased on the Let's build GPT video from Andrej Karpathy.\n\nThis is just an attempt to recreate the transformer Andrej made in his video with the goal of learning more about torch, transformers, and neural networks in general.\n\nTo run, make sure 'python' '3.10' and 'poetry' are installed. You can then run 'poetry install' to get the dependencies (it's just torch and numpy).\n\nFinally, you can run the code with 'poetry run python ./URL'\n\nNote that the first run will train the model and then save the trained weights to 'URL'. Subsequent runs will load these weights."
] | [
"TAGS\n#text-generation #en #dataset-karpathy/tiny_shakespeare #license-mit #region-us \n",
"# Bad GPT\n\nBased on the Let's build GPT video from Andrej Karpathy.\n\nThis is just an attempt to recreate the transformer Andrej made in his video with the goal of learning more about torch, transformers, and neural networks in general.\n\nTo run, make sure 'python' '3.10' and 'poetry' are installed. You can then run 'poetry install' to get the dependencies (it's just torch and numpy).\n\nFinally, you can run the code with 'poetry run python ./URL'\n\nNote that the first run will train the model and then save the trained weights to 'URL'. Subsequent runs will load these weights."
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1883
- Accuracy: 0.9540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3824 | 1.0 | 370 | 0.2976 | 0.9310 |
| 0.1977 | 2.0 | 740 | 0.2266 | 0.9378 |
| 0.1885 | 3.0 | 1110 | 0.2039 | 0.9378 |
| 0.1288 | 4.0 | 1480 | 0.1941 | 0.9405 |
| 0.1368 | 5.0 | 1850 | 0.1894 | 0.9418 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["image-classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224", "model-index": [{"name": "vit-base-oxford-iiit-pets", "results": []}]} | walterg777/vit-base-oxford-iiit-pets | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:00:17+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| vit-base-oxford-iiit-pets
=========================
This model is a fine-tuned version of google/vit-base-patch16-224 on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1883
* Accuracy: 0.9540
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["trl", "sft"]} | zementalist/llama-3-8B-chat-psychotherapist-v2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-29T12:00:45+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | SparshSyde/xlmroberta_detector_system_prompt_leak_iter2 | null | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:01:32+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #xlm-roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #xlm-roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# LewPerren/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.4589
- Validation Loss: 2.3088
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.4589 | 2.3088 | 0 |
### Framework versions
- Transformers 4.40.0
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "LewPerren/my_awesome_qa_model", "results": []}]} | LewPerren/my_awesome_qa_model | null | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:02:51+00:00 | [] | [] | TAGS
#transformers #tf #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
| LewPerren/my\_awesome\_qa\_model
================================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 3.4589
* Validation Loss: 2.3088
* Epoch: 0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 500, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.40.0
* TensorFlow 2.15.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 500, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tf #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 500, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | null |
# Introduction
Novora Code Classifier v1 Tiny, is a tiny `Text Classification` model, which classifies given code text input under 1 of `31` different classes (programming languages).
This model is designed to be able to run on CPU, but optimally runs on GPUs.
# Info
- 1 of 31 classes output
- 512 token input dimension
- 64 hidden dimensions
- 2 linear layers
- The `snowflake-arctic-embed-xs` model is used as the embeddings model.
- Dataset split into 80% training set, 20% testing set.
- The combined test and training data is around 1000 chunks per programming language, the data is 31,100 chunks (entries) as 512 tokens per chunk, being a snippet of the code.
- Picked from the 18th epoch out of 20 done.
# Architecture
The `CodeClassifier-v1-Tiny` model employs a neural network architecture optimized for text classification tasks, specifically for classifying programming languages from code snippets. This model includes:
- **Bidirectional LSTM Feature Extractor**: This bidirectional LSTM layer processes input embeddings, effectively capturing contextual relationships in both forward and reverse directions within the code snippets.
- **Fully Connected Layers**: The network includes two linear layers. The first projects the pooled features into a hidden feature space, and the second linear layer maps these to the output classes, which correspond to different programming languages. A dropout layer with a rate of 0.5 between these layers helps mitigate overfitting.
The model's bidirectional nature and architectural components make it adept at understanding the syntax and structure crucial for code classification.
# Testing/Training Datasets
I have put here the samples entered into the training/testing pipeline, its a very small amount.
| Language | Testing Count | Training Count |
|--------------|---------------|----------------|
| Ada | 20 | 80 |
| Assembly | 20 | 80 |
| C | 20 | 80 |
| C# | 20 | 80 |
| C++ | 20 | 80 |
| COBOL | 14 | 55 |
| Common Lisp | 20 | 80 |
| Dart | 20 | 80 |
| Erlang | 20 | 80 |
| F# | 20 | 80 |
| Go | 20 | 80 |
| Haskell | 20 | 80 |
| Java | 20 | 80 |
| JavaScript | 20 | 80 |
| Julia | 20 | 80 |
| Kotlin | 20 | 80 |
| Lua | 20 | 80 |
| MATLAB | 20 | 80 |
| PHP | 20 | 80 |
| Perl | 20 | 80 |
| Prolog | 1 | 4 |
| Python | 20 | 80 |
| R | 20 | 80 |
| Ruby | 20 | 80 |
| Rust | 20 | 80 |
| SQL | 20 | 80 |
| Scala | 20 | 80 |
| Swift | 20 | 80 |
| TypeScript | 20 | 80 |
# Example Code
```python
import torch.nn as nn
import torch.nn.functional as F
class CodeClassifier(nn.Module):
def __init__(self, num_classes, embedding_dim, hidden_dim, num_layers, bidirectional=False):
super(CodeClassifier, self).__init__()
self.feature_extractor = nn.LSTM(embedding_dim, hidden_dim, num_layers, batch_first=True, bidirectional=bidirectional)
self.dropout = nn.Dropout(0.5) # Reintroduce dropout
self.fc1 = nn.Linear(hidden_dim * (2 if bidirectional else 1), hidden_dim) # Intermediate layer
self.fc2 = nn.Linear(hidden_dim, num_classes) # Output layer
def forward(self, x):
x = x.unsqueeze(1) # Add sequence dimension
x, _ = self.feature_extractor(x)
x = x.squeeze(1) # Remove sequence dimension
x = self.fc1(x)
x = self.dropout(x) # Apply dropout
x = self.fc2(x)
return x
import torch
from transformers import AutoTokenizer, AutoModel
from pathlib import Path
def infer(text, model_path, embedding_model_name):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Load tokenizer and embedding model
tokenizer = AutoTokenizer.from_pretrained(embedding_model_name)
embedding_model = AutoModel.from_pretrained(embedding_model_name).to(device)
embedding_model.eval()
# Prepare inputs
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)
inputs = {k: v.to(device) for k, v in inputs.items()}
# Generate embeddings
with torch.no_grad():
embeddings = embedding_model(**inputs)[0][:, 0]
# Load classifier model
model = CodeClassifier(num_classes=31, embedding_dim=embeddings.size(-1), hidden_dim=64, num_layers=2, bidirectional=True)
model.load_state_dict(torch.load(model_path, map_location=device))
model = model.to(device)
model.eval()
# Predict class
with torch.no_grad():
output = model(embeddings)
_, predicted = torch.max(output, dim=1)
# Language labels
languages = [
'Ada', 'Assembly', 'C', 'C#', 'C++', 'COBOL', 'Common Lisp', 'Dart', 'Erlang', 'F#',
'Fortran', 'Go', 'Haskell', 'Java', 'JavaScript', 'Julia', 'Kotlin', 'Lua', 'MATLAB',
'Objective-C', 'PHP', 'Perl', 'Prolog', 'Python', 'R', 'Ruby', 'Rust', 'SQL', 'Scala',
'Swift', 'TypeScript'
]
return languages[predicted.item()]
# Example usage
if __name__ == "__main__":
example_text = "print('Hello, world!')" # Replace with actual text for inference
model_file_path = Path("./model.safetensors")
predicted_language = infer(example_text, model_file_path, "Snowflake/snowflake-arctic-embed-xs")
print(f"Predicted programming language: {predicted_language}")
```
| {"license": "apache-2.0", "datasets": ["Novora/CodeClassifier_v1"], "pipeline_tag": "text-classification"} | Novora/CodeClassifier-v1-Tiny | null | [
"pytorch",
"safetensors",
"text-classification",
"dataset:Novora/CodeClassifier_v1",
"license:apache-2.0",
"region:eu"
] | null | 2024-04-29T12:05:08+00:00 | [] | [] | TAGS
#pytorch #safetensors #text-classification #dataset-Novora/CodeClassifier_v1 #license-apache-2.0 #region-eu
| Introduction
============
Novora Code Classifier v1 Tiny, is a tiny 'Text Classification' model, which classifies given code text input under 1 of '31' different classes (programming languages).
This model is designed to be able to run on CPU, but optimally runs on GPUs.
Info
====
* 1 of 31 classes output
* 512 token input dimension
* 64 hidden dimensions
* 2 linear layers
* The 'snowflake-arctic-embed-xs' model is used as the embeddings model.
* Dataset split into 80% training set, 20% testing set.
* The combined test and training data is around 1000 chunks per programming language, the data is 31,100 chunks (entries) as 512 tokens per chunk, being a snippet of the code.
* Picked from the 18th epoch out of 20 done.
Architecture
============
The 'CodeClassifier-v1-Tiny' model employs a neural network architecture optimized for text classification tasks, specifically for classifying programming languages from code snippets. This model includes:
* Bidirectional LSTM Feature Extractor: This bidirectional LSTM layer processes input embeddings, effectively capturing contextual relationships in both forward and reverse directions within the code snippets.
* Fully Connected Layers: The network includes two linear layers. The first projects the pooled features into a hidden feature space, and the second linear layer maps these to the output classes, which correspond to different programming languages. A dropout layer with a rate of 0.5 between these layers helps mitigate overfitting.
The model's bidirectional nature and architectural components make it adept at understanding the syntax and structure crucial for code classification.
Testing/Training Datasets
=========================
I have put here the samples entered into the training/testing pipeline, its a very small amount.
Language: Ada, Testing Count: 20, Training Count: 80
Language: Assembly, Testing Count: 20, Training Count: 80
Language: C, Testing Count: 20, Training Count: 80
Language: C#, Testing Count: 20, Training Count: 80
Language: C++, Testing Count: 20, Training Count: 80
Language: COBOL, Testing Count: 14, Training Count: 55
Language: Common Lisp, Testing Count: 20, Training Count: 80
Language: Dart, Testing Count: 20, Training Count: 80
Language: Erlang, Testing Count: 20, Training Count: 80
Language: F#, Testing Count: 20, Training Count: 80
Language: Go, Testing Count: 20, Training Count: 80
Language: Haskell, Testing Count: 20, Training Count: 80
Language: Java, Testing Count: 20, Training Count: 80
Language: JavaScript, Testing Count: 20, Training Count: 80
Language: Julia, Testing Count: 20, Training Count: 80
Language: Kotlin, Testing Count: 20, Training Count: 80
Language: Lua, Testing Count: 20, Training Count: 80
Language: MATLAB, Testing Count: 20, Training Count: 80
Language: PHP, Testing Count: 20, Training Count: 80
Language: Perl, Testing Count: 20, Training Count: 80
Language: Prolog, Testing Count: 1, Training Count: 4
Language: Python, Testing Count: 20, Training Count: 80
Language: R, Testing Count: 20, Training Count: 80
Language: Ruby, Testing Count: 20, Training Count: 80
Language: Rust, Testing Count: 20, Training Count: 80
Language: SQL, Testing Count: 20, Training Count: 80
Language: Scala, Testing Count: 20, Training Count: 80
Language: Swift, Testing Count: 20, Training Count: 80
Language: TypeScript, Testing Count: 20, Training Count: 80
Example Code
============
| [] | [
"TAGS\n#pytorch #safetensors #text-classification #dataset-Novora/CodeClassifier_v1 #license-apache-2.0 #region-eu \n"
] |
audio-to-audio | null | <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style>
body {
font-family: Arial, sans-serif;
padding: 2rem;
color: #333;
}
.container {
max-width: 800px;
margin: 0 auto;
padding: 2rem;
border-radius: 5px;
box-shadow: 0 2px 5px rgba(0, 0, 0, 0.1);
text-align: center;
}
h1 {
margin-bottom: 1.5rem;
font-size: 2.5rem;
}
h2 {
margin-bottom: 1rem;
font-size: 2rem;
}
ul {
list-style: none;
padding: 0;
margin: 0;
}
ul li {
margin-bottom: 0.5rem;
}
p {
margin-bottom: 1.5rem;
font-size: 1.1rem;
}
a {
color: #007bff;
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
</style>
</head>
<body>
<div class="container">
<h1>Voice Conversion Hub: Discover Pretrained Models and More</h1>
<p>Welcome to our comprehensive repository, a treasure trove of pretrained models, HuBERT models, and an assortment of other files and models, all tailored for use in the Retrieval-based Voice Conversion (RVC) neural network.</p>
<hr style="border: none; height: 2px; background-color: #800080;">
<h2>Overview</h2>
<p>This repository is designed to be a one-stop-shop for all your RVC needs. It hosts a wide array of pretrained models, meticulously crafted to provide a robust foundation for your voice conversion tasks. The repository also includes a diverse range of HuBERT models, known for their proficiency in self-supervised speech representation learning.</p>
<hr style="border: none; height: 2px; background-color: #800080;">
<h2>Key Features</h2>
<ul>
<li><strong>Pretrained Models:</strong> A vast collection of pretrained models, ready to be fine-tuned for your specific voice conversion tasks. These models have been trained on diverse datasets, ensuring a broad spectrum of voice characteristics.</li>
<li><strong>HuBERT Models:</strong> A selection of HuBERT models, recognized for their ability to learn high-quality speech representations from raw audio data. These models are ideal for tasks that require a deep understanding of speech nuances.</li>
<li><strong>Additional Files and Models:</strong> A miscellaneous collection of files and models that can be beneficial for various aspects of voice conversion, from data preprocessing to model evaluation.</li>
</ul>
<hr style="border: none; height: 2px; background-color: #800080;">
<p>We invite you to explore this repository, leverage its resources, and contribute to the advancement of voice conversion technology. Whether you're a seasoned researcher or a budding enthusiast, we believe you'll find something of value here.</p>
<p>Happy exploring, and let's shape the future of voice conversion together!</p>
</div>
</body>
</html> | {"license": "mit", "tags": ["pretrained", "hubert", "RVC", "ai", "vits", "vc", "voice-cloning", "voice-conversion", "Voice2Voice"], "pipeline_tag": "audio-to-audio"} | Politrees/all_RVC-pretrained_and_other | null | [
"pretrained",
"hubert",
"RVC",
"ai",
"vits",
"vc",
"voice-cloning",
"voice-conversion",
"Voice2Voice",
"audio-to-audio",
"license:mit",
"region:us"
] | null | 2024-04-29T12:05:08+00:00 | [] | [] | TAGS
#pretrained #hubert #RVC #ai #vits #vc #voice-cloning #voice-conversion #Voice2Voice #audio-to-audio #license-mit #region-us
| <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style>
body {
font-family: Arial, sans-serif;
padding: 2rem;
color: #333;
}
.container {
max-width: 800px;
margin: 0 auto;
padding: 2rem;
border-radius: 5px;
box-shadow: 0 2px 5px rgba(0, 0, 0, 0.1);
text-align: center;
}
h1 {
margin-bottom: 1.5rem;
font-size: 2.5rem;
}
h2 {
margin-bottom: 1rem;
font-size: 2rem;
}
ul {
list-style: none;
padding: 0;
margin: 0;
}
ul li {
margin-bottom: 0.5rem;
}
p {
margin-bottom: 1.5rem;
font-size: 1.1rem;
}
a {
color: #007bff;
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
</style>
</head>
<body>
<div class="container">
<h1>Voice Conversion Hub: Discover Pretrained Models and More</h1>
<p>Welcome to our comprehensive repository, a treasure trove of pretrained models, HuBERT models, and an assortment of other files and models, all tailored for use in the Retrieval-based Voice Conversion (RVC) neural network.</p>
<hr style="border: none; height: 2px; background-color: #800080;">
<h2>Overview</h2>
<p>This repository is designed to be a one-stop-shop for all your RVC needs. It hosts a wide array of pretrained models, meticulously crafted to provide a robust foundation for your voice conversion tasks. The repository also includes a diverse range of HuBERT models, known for their proficiency in self-supervised speech representation learning.</p>
<hr style="border: none; height: 2px; background-color: #800080;">
<h2>Key Features</h2>
<ul>
<li><strong>Pretrained Models:</strong> A vast collection of pretrained models, ready to be fine-tuned for your specific voice conversion tasks. These models have been trained on diverse datasets, ensuring a broad spectrum of voice characteristics.</li>
<li><strong>HuBERT Models:</strong> A selection of HuBERT models, recognized for their ability to learn high-quality speech representations from raw audio data. These models are ideal for tasks that require a deep understanding of speech nuances.</li>
<li><strong>Additional Files and Models:</strong> A miscellaneous collection of files and models that can be beneficial for various aspects of voice conversion, from data preprocessing to model evaluation.</li>
</ul>
<hr style="border: none; height: 2px; background-color: #800080;">
<p>We invite you to explore this repository, leverage its resources, and contribute to the advancement of voice conversion technology. Whether you're a seasoned researcher or a budding enthusiast, we believe you'll find something of value here.</p>
<p>Happy exploring, and let's shape the future of voice conversion together!</p>
</div>
</body>
</html> | [] | [
"TAGS\n#pretrained #hubert #RVC #ai #vits #vc #voice-cloning #voice-conversion #Voice2Voice #audio-to-audio #license-mit #region-us \n"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GIT_inf_w_caption_blur_ep5_eval
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0688
- Rouge1: 11.5483
- Rouge2: 6.9038
- Rougel: 10.6731
- Rougelsum: 10.6966
- Gen Len: 217.74
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 0.0775 | 1.0 | 1586 | 0.0774 | 11.837 | 5.6006 | 10.9944 | 11.0165 | 218.44 |
| 0.0658 | 2.0 | 3172 | 0.0726 | 9.7028 | 5.0964 | 9.0714 | 9.0844 | 218.44 |
| 0.0541 | 3.0 | 4758 | 0.0693 | 11.4449 | 6.3978 | 10.5899 | 10.6179 | 218.44 |
| 0.0432 | 4.0 | 6344 | 0.0682 | 11.1405 | 6.5221 | 10.3109 | 10.3318 | 218.39 |
| 0.0342 | 5.0 | 7930 | 0.0688 | 11.5483 | 6.9038 | 10.6731 | 10.6966 | 217.74 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "microsoft/git-base", "model-index": [{"name": "GIT_inf_w_caption_blur_ep5_eval", "results": []}]} | vishwa27/GIT_inf_w_caption_blur_ep5_eval | null | [
"transformers",
"safetensors",
"git",
"text-generation",
"generated_from_trainer",
"base_model:microsoft/git-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:05:45+00:00 | [] | [] | TAGS
#transformers #safetensors #git #text-generation #generated_from_trainer #base_model-microsoft/git-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
| GIT\_inf\_w\_caption\_blur\_ep5\_eval
=====================================
This model is a fine-tuned version of microsoft/git-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0688
* Rouge1: 11.5483
* Rouge2: 6.9038
* Rougel: 10.6731
* Rougelsum: 10.6966
* Gen Len: 217.74
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.1
* Pytorch 2.2.1+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #git #text-generation #generated_from_trainer #base_model-microsoft/git-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** yadz45
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["fr"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | yadz45/IA_lora | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"fr",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null | 2024-04-29T12:07:13+00:00 | [] | [
"fr"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #fr #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
|
# Uploaded model
- Developed by: yadz45
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: yadz45\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #fr #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n",
"# Uploaded model\n\n- Developed by: yadz45\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
object-detection | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20240429_115939
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.12.0
- Tokenizers 0.15.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "facebook/detr-resnet-50", "model-index": [{"name": "20240429_115939", "results": []}]} | schoonhovenra/20240429_115939 | null | [
"transformers",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:07:44+00:00 | [] | [] | TAGS
#transformers #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us
|
# 20240429_115939
This model is a fine-tuned version of facebook/detr-resnet-50 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.12.0
- Tokenizers 0.15.1
| [
"# 20240429_115939\n\nThis model is a fine-tuned version of facebook/detr-resnet-50 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.37.2\n- Pytorch 2.3.0\n- Datasets 2.12.0\n- Tokenizers 0.15.1"
] | [
"TAGS\n#transformers #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# 20240429_115939\n\nThis model is a fine-tuned version of facebook/detr-resnet-50 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.37.2\n- Pytorch 2.3.0\n- Datasets 2.12.0\n- Tokenizers 0.15.1"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-invoice
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the generated dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0133
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 20
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.15.0
- Tokenizers 0.19.1
| {"license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "datasets": ["generated"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "microsoft/layoutlmv3-base", "model-index": [{"name": "layoutlmv3-finetuned-invoice", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "generated", "type": "generated", "config": "sroie", "split": "test", "args": "sroie"}, "metrics": [{"type": "precision", "value": 0.0, "name": "Precision"}, {"type": "recall", "value": 0.0, "name": "Recall"}, {"type": "f1", "value": 0.0, "name": "F1"}, {"type": "accuracy", "value": 0.8750789972614282, "name": "Accuracy"}]}]}]} | ShinzHira123/layoutlmv3-finetuned-invoice | null | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:generated",
"base_model:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:09:02+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #layoutlmv3 #token-classification #generated_from_trainer #dataset-generated #base_model-microsoft/layoutlmv3-base #license-cc-by-nc-sa-4.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# layoutlmv3-finetuned-invoice
This model is a fine-tuned version of microsoft/layoutlmv3-base on the generated dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0133
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 20
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.15.0
- Tokenizers 0.19.1
| [
"# layoutlmv3-finetuned-invoice\n\nThis model is a fine-tuned version of microsoft/layoutlmv3-base on the generated dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.0133\n- Precision: 0.0\n- Recall: 0.0\n- F1: 0.0\n- Accuracy: 0.8751",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 20",
"### Training results",
"### Framework versions\n\n- Transformers 4.41.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.15.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #layoutlmv3 #token-classification #generated_from_trainer #dataset-generated #base_model-microsoft/layoutlmv3-base #license-cc-by-nc-sa-4.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# layoutlmv3-finetuned-invoice\n\nThis model is a fine-tuned version of microsoft/layoutlmv3-base on the generated dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.0133\n- Precision: 0.0\n- Recall: 0.0\n- F1: 0.0\n- Accuracy: 0.8751",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 20",
"### Training results",
"### Framework versions\n\n- Transformers 4.41.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.15.0\n- Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | kyounghyun/EEVE2.8B_KO_Finetune_test | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-29T12:09:12+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Meta-Llama-3-8B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
- path: kloodia/raw_medic
type: oasst
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./lora-out
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
# lora-out
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5741
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3216 | 0.0 | 1 | 2.2561 |
| 1.7379 | 0.25 | 92 | 1.7855 |
| 1.6935 | 0.5 | 184 | 1.7075 |
| 1.7016 | 0.75 | 276 | 1.6663 |
| 1.5761 | 1.0 | 368 | 1.6371 |
| 1.4785 | 1.23 | 460 | 1.6220 |
| 1.4492 | 1.49 | 552 | 1.6023 |
| 1.6224 | 1.74 | 644 | 1.5887 |
| 1.5154 | 1.99 | 736 | 1.5789 |
| 1.4758 | 2.22 | 828 | 1.5787 |
| 1.4005 | 2.47 | 920 | 1.5758 |
| 1.458 | 2.72 | 1012 | 1.5741 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0 | {"license": "other", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B", "model-index": [{"name": "lora-out", "results": []}]} | kloodia/lora-8b-medic | null | [
"peft",
"tensorboard",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"8-bit",
"region:us"
] | null | 2024-04-29T12:09:31+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #llama #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B #license-other #8-bit #region-us
| <img src="URL alt="Built with Axolotl" width="200" height="32"/>
See axolotl config
axolotl version: '0.4.0'
lora-out
========
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5741
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* total\_eval\_batch\_size: 4
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 10
* num\_epochs: 3
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0.dev0
* Pytorch 2.1.2+cu118
* Datasets 2.15.0
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* total\\_eval\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.1.2+cu118\n* Datasets 2.15.0\n* Tokenizers 0.15.0"
] | [
"TAGS\n#peft #tensorboard #safetensors #llama #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B #license-other #8-bit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* total\\_eval\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.1.2+cu118\n* Datasets 2.15.0\n* Tokenizers 0.15.0"
] |
null | null | # Only a Matter of Style: Age Transformation Using a Style-Based Regression Model (SIGGRAPH 2021)
> The task of age transformation illustrates the change of an individual's appearance over time. Accurately modeling this complex transformation over an input facial image is extremely challenging as it requires making convincing and possibly large changes to facial features and head shape, while still preserving the input identity. In this work, we present an image-to-image translation method that learns to directly encode real facial images into the latent space of a pre-trained unconditional GAN (e.g., StyleGAN) subject to a given aging shift. We employ a pre-trained age regression network used to explicitly guide the encoder to generate the latent codes corresponding to the desired age. In this formulation, our method approaches the continuous aging process as a regression task between the input age and desired target age, providing fine-grained control on the generated image. Moreover, unlike other approaches that operate solely in the latent space using a prior on the path controlling age, our method learns a more disentangled, non-linear path. We demonstrate that the end-to-end nature of our approach, coupled with the rich semantic latent space of StyleGAN, allows for further editing of the generated images. Qualitative and quantitative evaluations show the advantages of our method compared to state-of-the-art approaches.
<a href="https://arxiv.org/abs/2102.02754"><img src="https://img.shields.io/badge/arXiv-2008.00951-b31b1b.svg" height=22.5></a>
<a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" height=22.5></a>
<a href="https://www.youtube.com/watch?v=zDTUbtmUbG8"><img src="https://img.shields.io/static/v1?label=Two Minute Papers&message=SAM Video&color=red" height=22.5></a>
<a href="https://youtu.be/X_pYC_LtBFw"><img src="https://img.shields.io/static/v1?label=SIGGRAPH 2021 &message=5 Minute Video&color=red" height=22.5></a>
<a href="https://replicate.ai/yuval-alaluf/sam"><img src="https://img.shields.io/static/v1?label=Replicate&message=Demo and Docker Image&color=darkgreen" height=22.5></a>
Inference Notebook: <a href="http://colab.research.google.com/github/yuval-alaluf/SAM/blob/master/notebooks/inference_playground.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" height=22.5></a>
Animation Notebook: <a href="http://colab.research.google.com/github/yuval-alaluf/SAM/blob/master/notebooks/animation_inference_playground.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" height=22.5></a>
<p align="center">
<img src="docs/teaser.jpeg" width="800px"/>
</p>
## Description
Official Implementation of our Style-based Age Manipulation (SAM) paper for both training and evaluation. SAM
allows modeling fine-grained age transformation using a single input facial image
<p align="center">
<img src="docs/2195.jpg" width="800px"/>
<img src="docs/1936.jpg" width="800px"/>
</p>
## Getting Started
### Prerequisites
- Linux or macOS
- NVIDIA GPU + CUDA CuDNN (CPU may be possible with some modifications, but is not inherently supported)
- Python 3
### Installation
- Dependencies:
We recommend running this repository using [Anaconda](https://docs.anaconda.com/anaconda/install/).
All dependencies for defining the environment are provided in `environment/sam_env.yaml`.
## Pretrained Models
Please download the pretrained aging model from the following links.
| Path | Description
| :--- | :----------
|[SAM](https://drive.google.com/file/d/1XyumF6_fdAxFmxpFcmPf-q84LU_22EMC/view?usp=sharing) | SAM trained on the FFHQ dataset for age transformation.
You can run this to download it to the right place:
```
mkdir pretrained_models
pip install gdown
gdown "https://drive.google.com/u/0/uc?id=1XyumF6_fdAxFmxpFcmPf-q84LU_22EMC&export=download" -O pretrained_models/sam_ffhq_aging.pt
wget "https://github.com/italojs/facial-landmarks-recognition/raw/master/shape_predictor_68_face_landmarks.dat"
```
In addition, we provide various auxiliary models needed for training your own SAM model from scratch.
This includes the pretrained pSp encoder model for generating the encodings of the input image and the aging classifier
used to compute the aging loss during training.
| Path | Description
| :--- | :----------
|[pSp Encoder](https://drive.google.com/file/d/1bMTNWkh5LArlaWSc_wa8VKyq2V42T2z0/view?usp=sharing) | pSp taken from [pixel2style2pixel](https://github.com/eladrich/pixel2style2pixel) trained on the FFHQ dataset for StyleGAN inversion.
|[FFHQ StyleGAN](https://drive.google.com/file/d/1EM87UquaoQmk17Q8d5kYIAHqu0dkYqdT/view?usp=sharing) | StyleGAN model pretrained on FFHQ taken from [rosinality](https://github.com/rosinality/stylegan2-pytorch) with 1024x1024 output resolution.
|[IR-SE50 Model](https://drive.google.com/file/d/1KW7bjndL3QG3sxBbZxreGHigcCCpsDgn/view?usp=sharing) | Pretrained IR-SE50 model taken from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) for use in our ID loss during training.
|[VGG Age Classifier](https://drive.google.com/file/d/1atzjZm_dJrCmFWCqWlyspSpr3nI6Evsh/view?usp=sharing) | VGG age classifier from DEX and fine-tuned on the FFHQ-Aging dataset for use in our aging loss
By default, we assume that all auxiliary models are downloaded and saved to the directory `pretrained_models`.
However, you may use your own paths by changing the necessary values in `configs/path_configs.py`.
## Training
### Preparing your Data
Please refer to `configs/paths_config.py` to define the necessary data paths and model paths for training and inference.
Then, refer to `configs/data_configs.py` to define the source/target data paths for the train and test sets as well as the
transforms to be used for training and inference.
As an example, we can first go to `configs/paths_config.py` and define:
```
dataset_paths = {
'ffhq': '/path/to/ffhq/images256x256'
'celeba_test': '/path/to/CelebAMask-HQ/test_img',
}
```
Then, in `configs/data_configs.py`, we define:
```
DATASETS = {
'ffhq_aging': {
'transforms': transforms_config.AgingTransforms,
'train_source_root': dataset_paths['ffhq'],
'train_target_root': dataset_paths['ffhq'],
'test_source_root': dataset_paths['celeba_test'],
'test_target_root': dataset_paths['celeba_test'],
}
}
```
When defining the datasets for training and inference, we will use the values defined in the above dictionary.
### Training SAM
The main training script can be found in `scripts/train.py`.
Intermediate training results are saved to `opts.exp_dir`. This includes checkpoints, train outputs, and test outputs.
Additionally, if you have tensorboard installed, you can visualize tensorboard logs in `opts.exp_dir/logs`.
Training SAM with the settings used in the paper can be done by running the following command:
```
python scripts/train.py \
--dataset_type=ffhq_aging \
--exp_dir=/path/to/experiment \
--workers=6 \
--batch_size=6 \
--test_batch_size=6 \
--test_workers=6 \
--val_interval=2500 \
--save_interval=10000 \
--start_from_encoded_w_plus \
--id_lambda=0.1 \
--lpips_lambda=0.1 \
--lpips_lambda_aging=0.1 \
--lpips_lambda_crop=0.6 \
--l2_lambda=0.25 \
--l2_lambda_aging=0.25 \
--l2_lambda_crop=1 \
--w_norm_lambda=0.005 \
--aging_lambda=5 \
--cycle_lambda=1 \
--input_nc=4 \
--target_age=uniform_random \
--use_weighted_id_loss
```
### Additional Notes
- See `options/train_options.py` for all training-specific flags.
- Note that using the flag `--start_from_encoded_w_plus` requires you to specify the path to the pretrained pSp encoder.
By default, this path is taken from `configs.paths_config.model_paths['pretrained_psp']`.
- If you wish to resume from a specific checkpoint (e.g. a pretrained SAM model), you may do so using `--checkpoint_path`.
## Notebooks
### Inference Notebook
To help visualize the results of SAM we provide a Jupyter notebook found in `notebooks/inference_playground.ipynb`.
The notebook will download the pretrained aging model and run inference on the images found in `notebooks/images`.
In addition, [Replicate](https://replicate.ai/) have created a demo for SAM where you can easily upload an image and run SAM on a desired set of ages! Check
out the demo [here](https://replicate.ai/yuval-alaluf/sam).
### MP4 Notebook
To show full lifespan results using SAM we provide an additional notebook `notebooks/animation_inference_playground.ipynb` that will
run aging on multiple ages between 0 and 100 and interpolate between the results to display full aging.
The results will be saved as an MP4 files in `notebooks/animations` showing the aging and de-aging results.
## Testing
### Inference
Having trained your model or if you're using a pretrained SAM model, you can use `scripts/inference.py` to run inference
on a set of images.
For example,
```
python scripts/inference.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--couple_outputs
--target_age=0,10,20,30,40,50,60,70,80
```
Additional notes to consider:
- During inference, the options used during training are loaded from the saved checkpoint and are then updated using the
test options passed to the inference script.
- Adding the flag `--couple_outputs` will save an additional image containing the input and output images side-by-side in the sub-directory
`inference_coupled`. Otherwise, only the output image is saved to the sub-directory `inference_results`.
- In the above example, we will run age transformation with target ages 0,10,...,80.
- The results of each target age are saved to the sub-directories `inference_results/TARGET_AGE` and `inference_coupled/TARGET_AGE`.
- By default, the images will be saved at resolution of 1024x1024, the original output size of StyleGAN.
- If you wish to save outputs resized to resolutions of 256x256, you can do so by adding the flag `--resize_outputs`.
### Side-by-Side Inference
The above inference script will save each aging result in a different sub-directory for each target age. Sometimes,
however, it is more convenient to save all aging results of a given input side-by-side like the following:
<p align="center">
<img src="docs/866.jpg" width="800px"/>
</p>
To do so, we provide a script `inference_side_by_side.py` that works in a similar manner as the regular inference script:
```
python scripts/inference_side_by_side.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--target_age=0,10,20,30,40,50,60,70,80
```
Here, all aging results 0,10,...,80 will be save side-by-side with the original input image.
### Reference-Guided Inference
In the paper, we demonstrated how one can perform style-mixing on the fine-level style inputs with a reference image
to control global features such as hair color. For example,
<p align="center">
<img src="docs/1005_style_mixing.jpg" width="800px"/>
</p>
To perform style mixing using reference images, we provide the script `reference_guided_inference.py`. Here,
we first perform aging using the specified target age(s). Then, style mixing is performed using the specified
reference images and the specified layers. For example, one can run:
```
python scripts/reference_guided_inference.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--ref_images_paths_file=/path/to/ref_list.txt \
--latent_mask=8,9 \
--target_age=50,60,70,80
```
Here, the reference images should be specified in the file defined by `--ref_images_paths_file` and should have the
following format:
```
/path/to/reference/1.jpg
/path/to/reference/2.jpg
/path/to/reference/3.jpg
/path/to/reference/4.jpg
/path/to/reference/5.jpg
```
In the above example, we will aging using 4 different target ages. For each target age, we first transform the
test samples defined by `--data_path` and then perform style mixing on layers 8,9 defined by `--latent_mask`.
The results of each target age are saved in its own sub-directory.
### Style Mixing
Instead of performing style mixing using a reference image, you can perform style mixing using randomly generated
w latent vectors by running the script `style_mixing.py`. This script works in a similar manner to the reference
guided inference except you do not need to specify the `--ref_images_paths_file` flag.
## Repository structure
| Path | Description <img width=200>
| :--- | :---
| SAM | Repository root folder
| ├ configs | Folder containing configs defining model/data paths and data transforms
| ├ criteria | Folder containing various loss criterias for training
| ├ datasets | Folder with various dataset objects and augmentations
| ├ docs | Folder containing images displayed in the README
| ├ environment | Folder containing Anaconda environment used in our experiments
| ├ models | Folder containing all the models and training objects
| │ ├ encoders | Folder containing various architecture implementations
| │ ├ stylegan2 | StyleGAN2 model from [rosinality](https://github.com/rosinality/stylegan2-pytorch)
| │ ├ psp.py | Implementation of pSp encoder
| │ └ dex_vgg.py | Implementation of DEX VGG classifier used in computation of aging loss
| ├ notebook | Folder with jupyter notebook containing SAM inference playground
| ├ options | Folder with training and test command-line options
| ├ scripts | Folder with running scripts for training and inference
| ├ training | Folder with main training logic and Ranger implementation from [lessw2020](https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer)
| ├ utils | Folder with various utility functions
| <img width=300> | <img>
## Credits
**StyleGAN2 model and implementation:**
https://github.com/rosinality/stylegan2-pytorch
Copyright (c) 2019 Kim Seonghyeon
License (MIT) https://github.com/rosinality/stylegan2-pytorch/blob/master/LICENSE
**IR-SE50 model and implementations:**
https://github.com/TreB1eN/InsightFace_Pytorch
Copyright (c) 2018 TreB1eN
License (MIT) https://github.com/TreB1eN/InsightFace_Pytorch/blob/master/LICENSE
**Ranger optimizer implementation:**
https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer
License (Apache License 2.0) https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer/blob/master/LICENSE
**LPIPS model and implementation:**
https://github.com/S-aiueo32/lpips-pytorch
Copyright (c) 2020, Sou Uchida
License (BSD 2-Clause) https://github.com/S-aiueo32/lpips-pytorch/blob/master/LICENSE
**DEX VGG model and implementation:**
https://github.com/InterDigitalInc/HRFAE
Copyright (c) 2020, InterDigital R&D France
https://github.com/InterDigitalInc/HRFAE/blob/master/LICENSE.txt
**pSp model and implementation:**
https://github.com/eladrich/pixel2style2pixel
Copyright (c) 2020 Elad Richardson, Yuval Alaluf
https://github.com/eladrich/pixel2style2pixel/blob/master/LICENSE
## Acknowledgments
This code borrows heavily from [pixel2style2pixel](https://github.com/eladrich/pixel2style2pixel)
## Citation
If you use this code for your research, please cite our paper <a href="https://arxiv.org/abs/2102.02754">Only a Matter of Style: Age Transformation Using a Style-Based Regression Model</a>:
```
@article{alaluf2021matter,
author = {Alaluf, Yuval and Patashnik, Or and Cohen-Or, Daniel},
title = {Only a Matter of Style: Age Transformation Using a Style-Based Regression Model},
journal = {ACM Trans. Graph.},
issue_date = {August 2021},
volume = {40},
number = {4},
year = {2021},
articleno = {45},
publisher = {Association for Computing Machinery},
url = {https://doi.org/10.1145/3450626.3459805}
}
```
| {} | deneesk/sam-model | null | [
"arxiv:2102.02754",
"region:us"
] | null | 2024-04-29T12:10:26+00:00 | [
"2102.02754"
] | [] | TAGS
#arxiv-2102.02754 #region-us
| Only a Matter of Style: Age Transformation Using a Style-Based Regression Model (SIGGRAPH 2021)
===============================================================================================
>
> The task of age transformation illustrates the change of an individual's appearance over time. Accurately modeling this complex transformation over an input facial image is extremely challenging as it requires making convincing and possibly large changes to facial features and head shape, while still preserving the input identity. In this work, we present an image-to-image translation method that learns to directly encode real facial images into the latent space of a pre-trained unconditional GAN (e.g., StyleGAN) subject to a given aging shift. We employ a pre-trained age regression network used to explicitly guide the encoder to generate the latent codes corresponding to the desired age. In this formulation, our method approaches the continuous aging process as a regression task between the input age and desired target age, providing fine-grained control on the generated image. Moreover, unlike other approaches that operate solely in the latent space using a prior on the path controlling age, our method learns a more disentangled, non-linear path. We demonstrate that the end-to-end nature of our approach, coupled with the rich semantic latent space of StyleGAN, allows for further editing of the generated images. Qualitative and quantitative evaluations show the advantages of our method compared to state-of-the-art approaches.
>
>
>
<a href="URL src="URL height=22.5>
<a href="URL src="URL height=22.5>
<a href="URL src="URL Minute Papers&message=SAM Video&color=red" height=22.5>
<a href="URL src="URL 2021 &message=5 Minute Video&color=red" height=22.5>
<a href="URL src="URL and Docker Image&color=darkgreen" height=22.5>
Inference Notebook: Β <a href="URL src="URL height=22.5>
Animation Notebook: <a href="URL src="URL height=22.5>

Description
-----------
Official Implementation of our Style-based Age Manipulation (SAM) paper for both training and evaluation. SAM
allows modeling fine-grained age transformation using a single input facial image


Getting Started
---------------
### Prerequisites
* Linux or macOS
* NVIDIA GPU + CUDA CuDNN (CPU may be possible with some modifications, but is not inherently supported)
* Python 3
### Installation
* Dependencies:
We recommend running this repository using Anaconda.
All dependencies for defining the environment are provided in 'environment/sam\_env.yaml'.
Pretrained Models
-----------------
Please download the pretrained aging model from the following links.
You can run this to download it to the right place:
In addition, we provide various auxiliary models needed for training your own SAM model from scratch.
This includes the pretrained pSp encoder model for generating the encodings of the input image and the aging classifier
used to compute the aging loss during training.
By default, we assume that all auxiliary models are downloaded and saved to the directory 'pretrained\_models'.
However, you may use your own paths by changing the necessary values in 'configs/path\_configs.py'.
Training
--------
### Preparing your Data
Please refer to 'configs/paths\_config.py' to define the necessary data paths and model paths for training and inference.
Then, refer to 'configs/data\_configs.py' to define the source/target data paths for the train and test sets as well as the
transforms to be used for training and inference.
As an example, we can first go to 'configs/paths\_config.py' and define:
Then, in 'configs/data\_configs.py', we define:
When defining the datasets for training and inference, we will use the values defined in the above dictionary.
### Training SAM
The main training script can be found in 'scripts/URL'.
Intermediate training results are saved to 'opts.exp\_dir'. This includes checkpoints, train outputs, and test outputs.
Additionally, if you have tensorboard installed, you can visualize tensorboard logs in 'opts.exp\_dir/logs'.
Training SAM with the settings used in the paper can be done by running the following command:
### Additional Notes
* See 'options/train\_options.py' for all training-specific flags.
* Note that using the flag '--start\_from\_encoded\_w\_plus' requires you to specify the path to the pretrained pSp encoder.
By default, this path is taken from 'configs.paths\_config.model\_paths['pretrained\_psp']'.
* If you wish to resume from a specific checkpoint (e.g. a pretrained SAM model), you may do so using '--checkpoint\_path'.
Notebooks
---------
### Inference Notebook
To help visualize the results of SAM we provide a Jupyter notebook found in 'notebooks/inference\_playground.ipynb'.
The notebook will download the pretrained aging model and run inference on the images found in 'notebooks/images'.
In addition, Replicate have created a demo for SAM where you can easily upload an image and run SAM on a desired set of ages! Check
out the demo here.
### MP4 Notebook
To show full lifespan results using SAM we provide an additional notebook 'notebooks/animation\_inference\_playground.ipynb' that will
run aging on multiple ages between 0 and 100 and interpolate between the results to display full aging.
The results will be saved as an MP4 files in 'notebooks/animations' showing the aging and de-aging results.
Testing
-------
### Inference
Having trained your model or if you're using a pretrained SAM model, you can use 'scripts/URL' to run inference
on a set of images.
For example,
Additional notes to consider:
* During inference, the options used during training are loaded from the saved checkpoint and are then updated using the
test options passed to the inference script.
* Adding the flag '--couple\_outputs' will save an additional image containing the input and output images side-by-side in the sub-directory
'inference\_coupled'. Otherwise, only the output image is saved to the sub-directory 'inference\_results'.
* In the above example, we will run age transformation with target ages 0,10,...,80.
+ The results of each target age are saved to the sub-directories 'inference\_results/TARGET\_AGE' and 'inference\_coupled/TARGET\_AGE'.
* By default, the images will be saved at resolution of 1024x1024, the original output size of StyleGAN.
+ If you wish to save outputs resized to resolutions of 256x256, you can do so by adding the flag '--resize\_outputs'.
### Side-by-Side Inference
The above inference script will save each aging result in a different sub-directory for each target age. Sometimes,
however, it is more convenient to save all aging results of a given input side-by-side like the following:

To do so, we provide a script 'inference\_side\_by\_side.py' that works in a similar manner as the regular inference script:
Here, all aging results 0,10,...,80 will be save side-by-side with the original input image.
### Reference-Guided Inference
In the paper, we demonstrated how one can perform style-mixing on the fine-level style inputs with a reference image
to control global features such as hair color. For example,

To perform style mixing using reference images, we provide the script 'reference\_guided\_inference.py'. Here,
we first perform aging using the specified target age(s). Then, style mixing is performed using the specified
reference images and the specified layers. For example, one can run:
Here, the reference images should be specified in the file defined by '--ref\_images\_paths\_file' and should have the
following format:
In the above example, we will aging using 4 different target ages. For each target age, we first transform the
test samples defined by '--data\_path' and then perform style mixing on layers 8,9 defined by '--latent\_mask'.
The results of each target age are saved in its own sub-directory.
### Style Mixing
Instead of performing style mixing using a reference image, you can perform style mixing using randomly generated
w latent vectors by running the script 'style\_mixing.py'. This script works in a similar manner to the reference
guided inference except you do not need to specify the '--ref\_images\_paths\_file' flag.
Repository structure
--------------------
Credits
-------
StyleGAN2 model and implementation:
URL
Copyright (c) 2019 Kim Seonghyeon
License (MIT) URL
IR-SE50 model and implementations:
URL
Copyright (c) 2018 TreB1eN
License (MIT) URL
Ranger optimizer implementation:
URL
License (Apache License 2.0) URL
LPIPS model and implementation:
URL
Copyright (c) 2020, Sou Uchida
License (BSD 2-Clause) URL
DEX VGG model and implementation:
URL
Copyright (c) 2020, InterDigital R&D France
URL
pSp model and implementation:
URL
Copyright (c) 2020 Elad Richardson, Yuval Alaluf
URL
Acknowledgments
---------------
This code borrows heavily from pixel2style2pixel
If you use this code for your research, please cite our paper <a href="URL a Matter of Style: Age Transformation Using a Style-Based Regression Model:
| [
"### Prerequisites\n\n\n* Linux or macOS\n* NVIDIA GPU + CUDA CuDNN (CPU may be possible with some modifications, but is not inherently supported)\n* Python 3",
"### Installation\n\n\n* Dependencies: \n\nWe recommend running this repository using Anaconda.\nAll dependencies for defining the environment are provided in 'environment/sam\\_env.yaml'.\n\n\nPretrained Models\n-----------------\n\n\nPlease download the pretrained aging model from the following links.\n\n\n\nYou can run this to download it to the right place:\n\n\nIn addition, we provide various auxiliary models needed for training your own SAM model from scratch. \n\nThis includes the pretrained pSp encoder model for generating the encodings of the input image and the aging classifier\nused to compute the aging loss during training.\n\n\n\nBy default, we assume that all auxiliary models are downloaded and saved to the directory 'pretrained\\_models'.\nHowever, you may use your own paths by changing the necessary values in 'configs/path\\_configs.py'.\n\n\nTraining\n--------",
"### Preparing your Data\n\n\nPlease refer to 'configs/paths\\_config.py' to define the necessary data paths and model paths for training and inference. \n\nThen, refer to 'configs/data\\_configs.py' to define the source/target data paths for the train and test sets as well as the\ntransforms to be used for training and inference.\n\n\nAs an example, we can first go to 'configs/paths\\_config.py' and define:\n\n\nThen, in 'configs/data\\_configs.py', we define:\n\n\nWhen defining the datasets for training and inference, we will use the values defined in the above dictionary.",
"### Training SAM\n\n\nThe main training script can be found in 'scripts/URL'. \n\nIntermediate training results are saved to 'opts.exp\\_dir'. This includes checkpoints, train outputs, and test outputs. \n\nAdditionally, if you have tensorboard installed, you can visualize tensorboard logs in 'opts.exp\\_dir/logs'.\n\n\nTraining SAM with the settings used in the paper can be done by running the following command:",
"### Additional Notes\n\n\n* See 'options/train\\_options.py' for all training-specific flags.\n* Note that using the flag '--start\\_from\\_encoded\\_w\\_plus' requires you to specify the path to the pretrained pSp encoder. \n\nBy default, this path is taken from 'configs.paths\\_config.model\\_paths['pretrained\\_psp']'.\n* If you wish to resume from a specific checkpoint (e.g. a pretrained SAM model), you may do so using '--checkpoint\\_path'.\n\n\nNotebooks\n---------",
"### Inference Notebook\n\n\nTo help visualize the results of SAM we provide a Jupyter notebook found in 'notebooks/inference\\_playground.ipynb'. \n\nThe notebook will download the pretrained aging model and run inference on the images found in 'notebooks/images'.\n\n\nIn addition, Replicate have created a demo for SAM where you can easily upload an image and run SAM on a desired set of ages! Check\nout the demo here.",
"### MP4 Notebook\n\n\nTo show full lifespan results using SAM we provide an additional notebook 'notebooks/animation\\_inference\\_playground.ipynb' that will\nrun aging on multiple ages between 0 and 100 and interpolate between the results to display full aging.\nThe results will be saved as an MP4 files in 'notebooks/animations' showing the aging and de-aging results.\n\n\nTesting\n-------",
"### Inference\n\n\nHaving trained your model or if you're using a pretrained SAM model, you can use 'scripts/URL' to run inference\non a set of images. \n\nFor example,\n\n\nAdditional notes to consider:\n\n\n* During inference, the options used during training are loaded from the saved checkpoint and are then updated using the\ntest options passed to the inference script.\n* Adding the flag '--couple\\_outputs' will save an additional image containing the input and output images side-by-side in the sub-directory\n'inference\\_coupled'. Otherwise, only the output image is saved to the sub-directory 'inference\\_results'.\n* In the above example, we will run age transformation with target ages 0,10,...,80.\n\t+ The results of each target age are saved to the sub-directories 'inference\\_results/TARGET\\_AGE' and 'inference\\_coupled/TARGET\\_AGE'.\n* By default, the images will be saved at resolution of 1024x1024, the original output size of StyleGAN.\n\t+ If you wish to save outputs resized to resolutions of 256x256, you can do so by adding the flag '--resize\\_outputs'.",
"### Side-by-Side Inference\n\n\nThe above inference script will save each aging result in a different sub-directory for each target age. Sometimes,\nhowever, it is more convenient to save all aging results of a given input side-by-side like the following:\n\n\n\n\n\n\n\nTo do so, we provide a script 'inference\\_side\\_by\\_side.py' that works in a similar manner as the regular inference script:\n\n\nHere, all aging results 0,10,...,80 will be save side-by-side with the original input image.",
"### Reference-Guided Inference\n\n\nIn the paper, we demonstrated how one can perform style-mixing on the fine-level style inputs with a reference image\nto control global features such as hair color. For example,\n\n\n\n\n\n\n\nTo perform style mixing using reference images, we provide the script 'reference\\_guided\\_inference.py'. Here,\nwe first perform aging using the specified target age(s). Then, style mixing is performed using the specified\nreference images and the specified layers. For example, one can run:\n\n\nHere, the reference images should be specified in the file defined by '--ref\\_images\\_paths\\_file' and should have the\nfollowing format:\n\n\nIn the above example, we will aging using 4 different target ages. For each target age, we first transform the\ntest samples defined by '--data\\_path' and then perform style mixing on layers 8,9 defined by '--latent\\_mask'.\nThe results of each target age are saved in its own sub-directory.",
"### Style Mixing\n\n\nInstead of performing style mixing using a reference image, you can perform style mixing using randomly generated\nw latent vectors by running the script 'style\\_mixing.py'. This script works in a similar manner to the reference\nguided inference except you do not need to specify the '--ref\\_images\\_paths\\_file' flag.\n\n\nRepository structure\n--------------------\n\n\n\nCredits\n-------\n\n\nStyleGAN2 model and implementation: \n\nURL \n\nCopyright (c) 2019 Kim Seonghyeon \n\nLicense (MIT) URL\n\n\nIR-SE50 model and implementations: \n\nURL \n\nCopyright (c) 2018 TreB1eN \n\nLicense (MIT) URL\n\n\nRanger optimizer implementation: \n\nURL \n\nLicense (Apache License 2.0) URL\n\n\nLPIPS model and implementation: \n\nURL \n\nCopyright (c) 2020, Sou Uchida \n\nLicense (BSD 2-Clause) URL\n\n\nDEX VGG model and implementation: \n\nURL \n\nCopyright (c) 2020, InterDigital R&D France \n\nURL\n\n\npSp model and implementation: \n\nURL \n\nCopyright (c) 2020 Elad Richardson, Yuval Alaluf \n\nURL\n\n\nAcknowledgments\n---------------\n\n\nThis code borrows heavily from pixel2style2pixel\n\n\nIf you use this code for your research, please cite our paper <a href=\"URL a Matter of Style: Age Transformation Using a Style-Based Regression Model:"
] | [
"TAGS\n#arxiv-2102.02754 #region-us \n",
"### Prerequisites\n\n\n* Linux or macOS\n* NVIDIA GPU + CUDA CuDNN (CPU may be possible with some modifications, but is not inherently supported)\n* Python 3",
"### Installation\n\n\n* Dependencies: \n\nWe recommend running this repository using Anaconda.\nAll dependencies for defining the environment are provided in 'environment/sam\\_env.yaml'.\n\n\nPretrained Models\n-----------------\n\n\nPlease download the pretrained aging model from the following links.\n\n\n\nYou can run this to download it to the right place:\n\n\nIn addition, we provide various auxiliary models needed for training your own SAM model from scratch. \n\nThis includes the pretrained pSp encoder model for generating the encodings of the input image and the aging classifier\nused to compute the aging loss during training.\n\n\n\nBy default, we assume that all auxiliary models are downloaded and saved to the directory 'pretrained\\_models'.\nHowever, you may use your own paths by changing the necessary values in 'configs/path\\_configs.py'.\n\n\nTraining\n--------",
"### Preparing your Data\n\n\nPlease refer to 'configs/paths\\_config.py' to define the necessary data paths and model paths for training and inference. \n\nThen, refer to 'configs/data\\_configs.py' to define the source/target data paths for the train and test sets as well as the\ntransforms to be used for training and inference.\n\n\nAs an example, we can first go to 'configs/paths\\_config.py' and define:\n\n\nThen, in 'configs/data\\_configs.py', we define:\n\n\nWhen defining the datasets for training and inference, we will use the values defined in the above dictionary.",
"### Training SAM\n\n\nThe main training script can be found in 'scripts/URL'. \n\nIntermediate training results are saved to 'opts.exp\\_dir'. This includes checkpoints, train outputs, and test outputs. \n\nAdditionally, if you have tensorboard installed, you can visualize tensorboard logs in 'opts.exp\\_dir/logs'.\n\n\nTraining SAM with the settings used in the paper can be done by running the following command:",
"### Additional Notes\n\n\n* See 'options/train\\_options.py' for all training-specific flags.\n* Note that using the flag '--start\\_from\\_encoded\\_w\\_plus' requires you to specify the path to the pretrained pSp encoder. \n\nBy default, this path is taken from 'configs.paths\\_config.model\\_paths['pretrained\\_psp']'.\n* If you wish to resume from a specific checkpoint (e.g. a pretrained SAM model), you may do so using '--checkpoint\\_path'.\n\n\nNotebooks\n---------",
"### Inference Notebook\n\n\nTo help visualize the results of SAM we provide a Jupyter notebook found in 'notebooks/inference\\_playground.ipynb'. \n\nThe notebook will download the pretrained aging model and run inference on the images found in 'notebooks/images'.\n\n\nIn addition, Replicate have created a demo for SAM where you can easily upload an image and run SAM on a desired set of ages! Check\nout the demo here.",
"### MP4 Notebook\n\n\nTo show full lifespan results using SAM we provide an additional notebook 'notebooks/animation\\_inference\\_playground.ipynb' that will\nrun aging on multiple ages between 0 and 100 and interpolate between the results to display full aging.\nThe results will be saved as an MP4 files in 'notebooks/animations' showing the aging and de-aging results.\n\n\nTesting\n-------",
"### Inference\n\n\nHaving trained your model or if you're using a pretrained SAM model, you can use 'scripts/URL' to run inference\non a set of images. \n\nFor example,\n\n\nAdditional notes to consider:\n\n\n* During inference, the options used during training are loaded from the saved checkpoint and are then updated using the\ntest options passed to the inference script.\n* Adding the flag '--couple\\_outputs' will save an additional image containing the input and output images side-by-side in the sub-directory\n'inference\\_coupled'. Otherwise, only the output image is saved to the sub-directory 'inference\\_results'.\n* In the above example, we will run age transformation with target ages 0,10,...,80.\n\t+ The results of each target age are saved to the sub-directories 'inference\\_results/TARGET\\_AGE' and 'inference\\_coupled/TARGET\\_AGE'.\n* By default, the images will be saved at resolution of 1024x1024, the original output size of StyleGAN.\n\t+ If you wish to save outputs resized to resolutions of 256x256, you can do so by adding the flag '--resize\\_outputs'.",
"### Side-by-Side Inference\n\n\nThe above inference script will save each aging result in a different sub-directory for each target age. Sometimes,\nhowever, it is more convenient to save all aging results of a given input side-by-side like the following:\n\n\n\n\n\n\n\nTo do so, we provide a script 'inference\\_side\\_by\\_side.py' that works in a similar manner as the regular inference script:\n\n\nHere, all aging results 0,10,...,80 will be save side-by-side with the original input image.",
"### Reference-Guided Inference\n\n\nIn the paper, we demonstrated how one can perform style-mixing on the fine-level style inputs with a reference image\nto control global features such as hair color. For example,\n\n\n\n\n\n\n\nTo perform style mixing using reference images, we provide the script 'reference\\_guided\\_inference.py'. Here,\nwe first perform aging using the specified target age(s). Then, style mixing is performed using the specified\nreference images and the specified layers. For example, one can run:\n\n\nHere, the reference images should be specified in the file defined by '--ref\\_images\\_paths\\_file' and should have the\nfollowing format:\n\n\nIn the above example, we will aging using 4 different target ages. For each target age, we first transform the\ntest samples defined by '--data\\_path' and then perform style mixing on layers 8,9 defined by '--latent\\_mask'.\nThe results of each target age are saved in its own sub-directory.",
"### Style Mixing\n\n\nInstead of performing style mixing using a reference image, you can perform style mixing using randomly generated\nw latent vectors by running the script 'style\\_mixing.py'. This script works in a similar manner to the reference\nguided inference except you do not need to specify the '--ref\\_images\\_paths\\_file' flag.\n\n\nRepository structure\n--------------------\n\n\n\nCredits\n-------\n\n\nStyleGAN2 model and implementation: \n\nURL \n\nCopyright (c) 2019 Kim Seonghyeon \n\nLicense (MIT) URL\n\n\nIR-SE50 model and implementations: \n\nURL \n\nCopyright (c) 2018 TreB1eN \n\nLicense (MIT) URL\n\n\nRanger optimizer implementation: \n\nURL \n\nLicense (Apache License 2.0) URL\n\n\nLPIPS model and implementation: \n\nURL \n\nCopyright (c) 2020, Sou Uchida \n\nLicense (BSD 2-Clause) URL\n\n\nDEX VGG model and implementation: \n\nURL \n\nCopyright (c) 2020, InterDigital R&D France \n\nURL\n\n\npSp model and implementation: \n\nURL \n\nCopyright (c) 2020 Elad Richardson, Yuval Alaluf \n\nURL\n\n\nAcknowledgments\n---------------\n\n\nThis code borrows heavily from pixel2style2pixel\n\n\nIf you use this code for your research, please cite our paper <a href=\"URL a Matter of Style: Age Transformation Using a Style-Based Regression Model:"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b-dpo-full-sft-wo-medication_qa
This model is a fine-tuned version of [Minbyul/llama2-7b-wo-medication_qa-sft](https://huggingface.co/Minbyul/llama2-7b-wo-medication_qa-sft) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4396
- Rewards/chosen: -0.1779
- Rewards/rejected: -1.2468
- Rewards/accuracies: 0.9500
- Rewards/margins: 1.0689
- Logps/rejected: -650.3414
- Logps/chosen: -477.8221
- Logits/rejected: -0.4720
- Logits/chosen: -0.4277
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Logits/chosen | Logits/rejected | Logps/chosen | Logps/rejected | Validation Loss | Rewards/accuracies | Rewards/chosen | Rewards/margins | Rewards/rejected |
|:-------------:|:-----:|:----:|:-------------:|:---------------:|:------------:|:--------------:|:---------------:|:------------------:|:--------------:|:---------------:|:----------------:|
| 0.2708 | 0.76 | 100 | -0.4292 | -0.4708 | -476.1255 | -635.9033 | 0.4682 | 0.9250 | -0.1609 | 0.9415 | -1.1024 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "Minbyul/llama2-7b-wo-medication_qa-sft", "model-index": [{"name": "llama2-7b-dpo-full-sft-wo-medication_qa", "results": []}]} | Minbyul/llama2-7b-dpo-full-sft-wo-medication_qa | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:Minbyul/llama2-7b-wo-medication_qa-sft",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:12:29+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/llama2-7b-wo-medication_qa-sft #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| llama2-7b-dpo-full-sft-wo-medication\_qa
========================================
This model is a fine-tuned version of Minbyul/llama2-7b-wo-medication\_qa-sft on the HuggingFaceH4/ultrafeedback\_binarized dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4396
* Rewards/chosen: -0.1779
* Rewards/rejected: -1.2468
* Rewards/accuracies: 0.9500
* Rewards/margins: 1.0689
* Logps/rejected: -650.3414
* Logps/chosen: -477.8221
* Logits/rejected: -0.4720
* Logits/chosen: -0.4277
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-07
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.1.2
* Datasets 2.14.6
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/llama2-7b-wo-medication_qa-sft #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/fwuvqk9 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:13:11+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["llama-factory"]} | anyasims/orpo2_capy2_1_BASE_sft1.0_zs1.0_ORb1.0-s2-0cf3 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:19:50+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #llama-factory #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #llama-factory #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | amphora/fc-non-decompose | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:21:01+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | GamblerOnTrain/CAWD290 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:21:19+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | quickstep3621/px0miu9 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:21:40+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | quickstep3621/sp5tw11 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:21:46+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {} | DMaxDesign/test | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2024-04-29T12:21:52+00:00 | [
"1910.09700"
] | [] | TAGS
#arxiv-1910.09700 #region-us
|
# Model Card for Model ID
This modelcard aims to be a base template for new models. It has been generated using this raw template.
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#arxiv-1910.09700 #region-us \n",
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | null |
# CroissantLLM - All smaller checkpoints
These models are part of the CroissantLLM initiative, and correspond to the checkpoints after 100B tokens for smaller model sizes.
These are the models used for scaling laws.
To play with the final model, we recommend using the Chat version: https://huggingface.co/croissantllm/CroissantLLMChat-v0.1.
https://arxiv.org/abs/2402.00786
## Abstract
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.
To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.
To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.
## Citation
Our work can be cited as:
```bash
@misc{faysse2024croissantllm,
title={CroissantLLM: A Truly Bilingual French-English Language Model},
author={Manuel Faysse and Patrick Fernandes and Nuno M. Guerreiro and AntΓ³nio Loison and Duarte M. Alves and Caio Corro and Nicolas Boizard and JoΓ£o Alves and Ricardo Rei and Pedro H. Martins and Antoni Bigata Casademunt and FranΓ§ois Yvon and AndrΓ© F. T. Martins and Gautier Viaud and CΓ©line Hudelot and Pierre Colombo},
year={2024},
eprint={2402.00786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Usage
This model is a base model, that is, it is not finetuned for Chat function and works best with few-shot prompting strategies.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "croissantllm/CroissantLLMBase"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
inputs = tokenizer("I am so tired I could sleep right now. -> Je suis si fatiguΓ© que je pourrais m'endormir maintenant.\nHe is heading to the market. -> Il va au marchΓ©.\nWe are running on the beach. ->", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60, temperature=0.3)
print(tokenizer.decode(tokens[0]))
# remove bos token
inputs = tokenizer("Capitales: France -> Paris, Italie -> Rome, Allemagne -> Berlin, Espagne ->", return_tensors="pt", add_special_tokens=True).to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60)
print(tokenizer.decode(tokens[0]))
``` | {"language": ["fr", "en"], "license": "mit", "tags": ["legal", "code", "text-generation-inference", "art"], "datasets": ["cerebras/SlimPajama-627B", "uonlp/CulturaX", "pg19", "bigcode/starcoderdata", "croissantllm/croissant_dataset"], "pipeline_tag": "text-generation"} | croissantllm/croissant_small_models | null | [
"tensorboard",
"safetensors",
"legal",
"code",
"text-generation-inference",
"art",
"text-generation",
"fr",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:uonlp/CulturaX",
"dataset:pg19",
"dataset:bigcode/starcoderdata",
"dataset:croissantllm/croissant_dataset",
"arxiv:2402.00786",
"license:mit",
"region:us"
] | null | 2024-04-29T12:23:29+00:00 | [
"2402.00786"
] | [
"fr",
"en"
] | TAGS
#tensorboard #safetensors #legal #code #text-generation-inference #art #text-generation #fr #en #dataset-cerebras/SlimPajama-627B #dataset-uonlp/CulturaX #dataset-pg19 #dataset-bigcode/starcoderdata #dataset-croissantllm/croissant_dataset #arxiv-2402.00786 #license-mit #region-us
|
# CroissantLLM - All smaller checkpoints
These models are part of the CroissantLLM initiative, and correspond to the checkpoints after 100B tokens for smaller model sizes.
These are the models used for scaling laws.
To play with the final model, we recommend using the Chat version: URL
URL
## Abstract
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.
To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.
To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.
Our work can be cited as:
## Usage
This model is a base model, that is, it is not finetuned for Chat function and works best with few-shot prompting strategies.
| [
"# CroissantLLM - All smaller checkpoints\n\nThese models are part of the CroissantLLM initiative, and correspond to the checkpoints after 100B tokens for smaller model sizes.\nThese are the models used for scaling laws.\n\n\nTo play with the final model, we recommend using the Chat version: URL\n\n\nURL",
"## Abstract\nWe introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.\nTo that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.\nTo assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.\nThis work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.\n\nOur work can be cited as:",
"## Usage\n\nThis model is a base model, that is, it is not finetuned for Chat function and works best with few-shot prompting strategies."
] | [
"TAGS\n#tensorboard #safetensors #legal #code #text-generation-inference #art #text-generation #fr #en #dataset-cerebras/SlimPajama-627B #dataset-uonlp/CulturaX #dataset-pg19 #dataset-bigcode/starcoderdata #dataset-croissantllm/croissant_dataset #arxiv-2402.00786 #license-mit #region-us \n",
"# CroissantLLM - All smaller checkpoints\n\nThese models are part of the CroissantLLM initiative, and correspond to the checkpoints after 100B tokens for smaller model sizes.\nThese are the models used for scaling laws.\n\n\nTo play with the final model, we recommend using the Chat version: URL\n\n\nURL",
"## Abstract\nWe introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.\nTo that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.\nTo assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.\nThis work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.\n\nOur work can be cited as:",
"## Usage\n\nThis model is a base model, that is, it is not finetuned for Chat function and works best with few-shot prompting strategies."
] |
text-generation | transformers |
**Scarlett-Llama-3-8B-v1.0**
**Note**: This model is updated version of my earlier released model **[Scarlett-Llama-3-8B](https://huggingface.co/ajibawa-2023/Scarlett-Llama-3-8B)** . This model doesn't have repetation problem.
Kindly check new examples included at the bottom.
Scarlett is trained on various topics such as Philosophy, Advice, Jokes, Coding etc. She is trained on more than 10000 set of conversations. Each set having 10~15 conversations.
Scarlett is far too good in generating human like conversation. Her ability to have longer & deeper conversation is terrific. Kindly check below given examples.
She will not be involved in any kind of adult/sexual role play.
This is Fully Finetuned Model. Quantize models will be available soon.
**Training:**
Entire dataset was trained on 4 x A100 80GB. Axolotl codebase was used for training purpose. For 3 epoch, training took more than 2 Hours. This was trained on Llama-3-8B by Meta.
**GGUF & Exllama**
GGUF: [Link](https://huggingface.co/bartowski/Scarlett-Llama-3-8B-v1.0-GGUF)
Exllama V2: [Link](https://huggingface.co/bartowski/Scarlett-Llama-3-8B-v1.0-exl2)
Special Thanks to [Bartowski](https://huggingface.co/bartowski) for quantizing this model.
**Example Prompt:**
This model uses **ChatML** prompt format.
```
<|im_start|>system
You are Scarlett, a Helpful Assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
You can modify above Prompt as per your requirement.
One example will be:
```
This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making.
You can ask it anything you want and it will do its best to give you accurate and relevant information.
```
I want to say special Thanks to the Open Source community for helping & guiding me to better understand the AI/Model development.
Thank you for your love & support.
**Example Output**
Example 1

Example 2

Example 3

Example 4

| {"language": ["en"], "license": "other", "tags": ["art", "philosophy", "romance", "jokes", "advice", "code", "companionship"], "license_name": "llama3", "license_link": "LICENSE"} | ajibawa-2023/Scarlett-Llama-3-8B-v1.0 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"art",
"philosophy",
"romance",
"jokes",
"advice",
"code",
"companionship",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:24:25+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #art #philosophy #romance #jokes #advice #code #companionship #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Scarlett-Llama-3-8B-v1.0
Note: This model is updated version of my earlier released model Scarlett-Llama-3-8B . This model doesn't have repetation problem.
Kindly check new examples included at the bottom.
Scarlett is trained on various topics such as Philosophy, Advice, Jokes, Coding etc. She is trained on more than 10000 set of conversations. Each set having 10~15 conversations.
Scarlett is far too good in generating human like conversation. Her ability to have longer & deeper conversation is terrific. Kindly check below given examples.
She will not be involved in any kind of adult/sexual role play.
This is Fully Finetuned Model. Quantize models will be available soon.
Training:
Entire dataset was trained on 4 x A100 80GB. Axolotl codebase was used for training purpose. For 3 epoch, training took more than 2 Hours. This was trained on Llama-3-8B by Meta.
GGUF & Exllama
GGUF: Link
Exllama V2: Link
Special Thanks to Bartowski for quantizing this model.
Example Prompt:
This model uses ChatML prompt format.
You can modify above Prompt as per your requirement.
One example will be:
I want to say special Thanks to the Open Source community for helping & guiding me to better understand the AI/Model development.
Thank you for your love & support.
Example Output
Example 1
!image/jpeg
Example 2
!image/jpeg
Example 3
!image/jpeg
Example 4
!image/jpeg
| [] | [
"TAGS\n#transformers #pytorch #llama #text-generation #art #philosophy #romance #jokes #advice #code #companionship #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# billm_conll2003_mistralai-Mistral-7B-v0.1_ckpt
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 9.2812
- Precision: 0.0625
- Recall: 0.25
- F1: 0.1
- Accuracy: 0.2222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 2 | 9.8438 | 0.0606 | 0.25 | 0.0976 | 0.2 |
| No log | 2.0 | 4 | 9.2812 | 0.0625 | 0.25 | 0.1 | 0.2222 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.0.0+cu117
- Datasets 2.15.0
- Tokenizers 0.19.1
## Training procedure
### Framework versions
- PEFT 0.6.2
| {"library_name": "peft", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "billm_conll2003_mistralai-Mistral-7B-v0.1_ckpt", "results": []}]} | ferrazzipietro/billm_conll2003_mistralai-Mistral-7B-v0.1_ckpt | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:conll2003",
"base_model:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-04-29T12:25:32+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #dataset-conll2003 #base_model-mistralai/Mistral-7B-v0.1 #region-us
| billm\_conll2003\_mistralai-Mistral-7B-v0.1\_ckpt
=================================================
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 9.2812
* Precision: 0.0625
* Recall: 0.25
* F1: 0.1
* Accuracy: 0.2222
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.0.0+cu117
* Datasets 2.15.0
* Tokenizers 0.19.1
Training procedure
------------------
### Framework versions
* PEFT 0.6.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.0.0+cu117\n* Datasets 2.15.0\n* Tokenizers 0.19.1\n\n\nTraining procedure\n------------------",
"### Framework versions\n\n\n* PEFT 0.6.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #dataset-conll2003 #base_model-mistralai/Mistral-7B-v0.1 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.0.0+cu117\n* Datasets 2.15.0\n* Tokenizers 0.19.1\n\n\nTraining procedure\n------------------",
"### Framework versions\n\n\n* PEFT 0.6.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | amphora/fc-both | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:26:07+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** shubham11
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"} | shubham11/mistralreleas_eessayScoring | null | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:26:26+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: shubham11
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: shubham11\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: shubham11\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
feature-extraction | transformers | # medical-20-0-16-jinaai_jina-embeddings-v2-small-en-1000-gpt-4-turbo-01_9062874564
## Model Description
medical-20-0-16-jinaai_jina-embeddings-v2-small-en-1000-gpt-4-turbo-01_9062874564 is a fine-tuned version of jinaai/jina-embeddings-v2-small-en designed for a specific domain.
## Use Case
This model is designed to support various applications in natural language processing and understanding.
## Associated Dataset
This the dataset for this model can be found [**here**](https://huggingface.co/datasets/florianhoenicke/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-1000-gpt-4-turbo-01_9062874564).
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from transformers import AutoModel, AutoTokenizer
llm_name = "medical-20-0-16-jinaai_jina-embeddings-v2-small-en-1000-gpt-4-turbo-01_9062874564"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name)
tokens = tokenizer("Your text here", return_tensors="pt")
embedding = model(**tokens)
```
| {} | florianhoenicke/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-1000-gpt-4-turbo-01_9062874564 | null | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"custom_code",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:27:40+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #feature-extraction #custom_code #endpoints_compatible #region-us
| # medical-20-0-16-jinaai_jina-embeddings-v2-small-en-1000-gpt-4-turbo-01_9062874564
## Model Description
medical-20-0-16-jinaai_jina-embeddings-v2-small-en-1000-gpt-4-turbo-01_9062874564 is a fine-tuned version of jinaai/jina-embeddings-v2-small-en designed for a specific domain.
## Use Case
This model is designed to support various applications in natural language processing and understanding.
## Associated Dataset
This the dataset for this model can be found here.
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
| [
"# medical-20-0-16-jinaai_jina-embeddings-v2-small-en-1000-gpt-4-turbo-01_9062874564",
"## Model Description\n\nmedical-20-0-16-jinaai_jina-embeddings-v2-small-en-1000-gpt-4-turbo-01_9062874564 is a fine-tuned version of jinaai/jina-embeddings-v2-small-en designed for a specific domain.",
"## Use Case\nThis model is designed to support various applications in natural language processing and understanding.",
"## Associated Dataset\n\nThis the dataset for this model can be found here.",
"## How to Use\n\nThis model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:"
] | [
"TAGS\n#transformers #safetensors #bert #feature-extraction #custom_code #endpoints_compatible #region-us \n",
"# medical-20-0-16-jinaai_jina-embeddings-v2-small-en-1000-gpt-4-turbo-01_9062874564",
"## Model Description\n\nmedical-20-0-16-jinaai_jina-embeddings-v2-small-en-1000-gpt-4-turbo-01_9062874564 is a fine-tuned version of jinaai/jina-embeddings-v2-small-en designed for a specific domain.",
"## Use Case\nThis model is designed to support various applications in natural language processing and understanding.",
"## Associated Dataset\n\nThis the dataset for this model can be found here.",
"## How to Use\n\nThis model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["llama-factory"]} | arml/llama3-8b-tuned | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:28:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #llama-factory #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #llama-factory #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
voice-activity-detection | pyannote | This is the model card of a pyannote model that has been pushed on the Hub. This model card has been automatically generated. | {"library_name": "pyannote", "tags": ["pyannote", "pyannote.audio", "pyannote-audio-model", "audio", "voice", "speech", "speaker", "speaker-diarization", "speaker-change-detection", "speaker-segmentation", "voice-activity-detection", "overlapped-speech-detection", "resegmentation"], "licence": "mit", "extra_gated_prompt": "The collected information will help acquire a better knowledge of pyannote.audio userbase and help its maintainers improve it further. Though\u00a0 this model uses MIT license and will always remain open-source, we will occasionnally email you about premium models and paid services around pyannote."} | kamilakesbi/segmentation_model_pyannote | null | [
"pyannote",
"pytorch",
"pyannote.audio",
"pyannote-audio-model",
"audio",
"voice",
"speech",
"speaker",
"speaker-diarization",
"speaker-change-detection",
"speaker-segmentation",
"voice-activity-detection",
"overlapped-speech-detection",
"resegmentation",
"region:us"
] | null | 2024-04-29T12:28:44+00:00 | [] | [] | TAGS
#pyannote #pytorch #pyannote.audio #pyannote-audio-model #audio #voice #speech #speaker #speaker-diarization #speaker-change-detection #speaker-segmentation #voice-activity-detection #overlapped-speech-detection #resegmentation #region-us
| This is the model card of a pyannote model that has been pushed on the Hub. This model card has been automatically generated. | [] | [
"TAGS\n#pyannote #pytorch #pyannote.audio #pyannote-audio-model #audio #voice #speech #speaker #speaker-diarization #speaker-change-detection #speaker-segmentation #voice-activity-detection #overlapped-speech-detection #resegmentation #region-us \n"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with awq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo shenzhi-wang/Llama3-8B-Chinese-Chat installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install autoawq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from awq import AutoAWQForCausalLM
model = AutoAWQForCausalLM.from_quantized("PrunaAI/shenzhi-wang-Llama3-8B-Chinese-Chat-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("shenzhi-wang/Llama3-8B-Chinese-Chat")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model shenzhi-wang/Llama3-8B-Chinese-Chat before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "shenzhi-wang/Llama3-8B-Chinese-Chat"} | PrunaAI/shenzhi-wang-Llama3-8B-Chinese-Chat-AWQ-4bit-smashed | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:shenzhi-wang/Llama3-8B-Chinese-Chat",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-29T12:29:16+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #pruna-ai #conversational #base_model-shenzhi-wang/Llama3-8B-Chinese-Chat #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo shenzhi-wang/Llama3-8B-Chinese-Chat installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model shenzhi-wang/Llama3-8B-Chinese-Chat before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with awq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo shenzhi-wang/Llama3-8B-Chinese-Chat installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model shenzhi-wang/Llama3-8B-Chinese-Chat before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #pruna-ai #conversational #base_model-shenzhi-wang/Llama3-8B-Chinese-Chat #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with awq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo shenzhi-wang/Llama3-8B-Chinese-Chat installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model shenzhi-wang/Llama3-8B-Chinese-Chat before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | GamblerOnTrain/ABW990 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:32:06+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Luluuu/0428_SEASON_baseline_checkpoint_6472 | null | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:32:17+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bart #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bart #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | GamblerOnTrain/ABW991 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:32:56+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/1xfg24x | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:35:39+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | null |
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Meta-Llama-3-70B-Instruct-GGUF
## Original Model
[meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
## Run with LlamaEdge
- LlamaEdge version: [v0.8.3](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.8.3) and above
- Prompt template
- Prompt type: `llama-3-chat`
- Prompt string
```text
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
- Context size: `8192`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Meta-Llama-3-70B-Instruct-Q5_K_M.gguf \
llama-api-server.wasm \
--prompt-template llama-3-chat \
--ctx-size 8192 \
--model-name Llama-3-70b
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Meta-Llama-3-70B-Instruct-Q5_K_M.gguf \
llama-chat.wasm \
--prompt-template llama-3-chat \
--ctx-size 8192
```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Meta-Llama-3-70B-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q2_K.gguf) | Q2_K | 2 | 26.4 GB| smallest, significant quality loss - not recommended for most purposes |
| [Meta-Llama-3-70B-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 37.1 GB| small, substantial quality loss |
| [Meta-Llama-3-70B-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 34.3 GB| very small, high quality loss |
| [Meta-Llama-3-70B-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 30.9 GB| very small, high quality loss |
| [Meta-Llama-3-70B-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q4_0.gguf) | Q4_0 | 4 | 40 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Meta-Llama-3-70B-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 42.5 GB| medium, balanced quality - recommended |
| [Meta-Llama-3-70B-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q5_0.gguf) | Q5_0 | 5 | 48.7 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [Meta-Llama-3-70B-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 50 GB| large, very low quality loss - recommended |
| [Meta-Llama-3-70B-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 48.7 GB| large, low quality loss - recommended |
| [Meta-Llama-3-70B-Instruct-Q6_K-00001-of-00002.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q6_K-00001-of-00002.gguf) | Q6_K | 6 | 32.1 GB| very large, extremely low quality loss |
| [Meta-Llama-3-70B-Instruct-Q6_K-00002-of-00002.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q6_K-00002-of-00002.gguf) | Q6_K | 6 | 25.7 GB| very large, extremely low quality loss |
| [Meta-Llama-3-70B-Instruct-Q8_0-00001-of-00003.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q8_0-00001-of-00003.gguf) | Q8_0 | 8 | 32 GB| very large, extremely low quality loss - not recommended |
| [Meta-Llama-3-70B-Instruct-Q8_0-00002-of-00003.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q8_0-00002-of-00003.gguf) | Q8_0 | 8 | 32.1 GB| very large, extremely low quality loss - not recommended |
| [Meta-Llama-3-70B-Instruct-Q8_0-00003-of-00003.gguf](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q8_0-00003-of-00003.gguf) | Q8_0 | 8 | 10.9 GB| very large, extremely low quality loss - not recommended |
The f16 GGUF model for the original model can be found in [second-state/Meta-Llama-3-70B-Instruct-f16-GGUF](https://huggingface.co/second-state/Meta-Llama-3-70B-Instruct-f16-GGUF)
*Quantized with llama.cpp b2715.* | {"language": ["en"], "license": "other", "model_name": "Llama3 70B Instruct", "license_name": "llama3", "arxiv": 2307.09288, "base_model": "meta-llama/Meta-Llama-3-70B-Instruct", "inference": false, "model_creator": "Meta Llama3", "model_type": "llama", "pipeline_tag": "text-generation", "quantized_by": "Second State Inc."} | second-state/Meta-Llama-3-70B-Instruct-GGUF | null | [
"gguf",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:other",
"region:us"
] | null | 2024-04-29T12:40:56+00:00 | [] | [
"en"
] | TAGS
#gguf #text-generation #en #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-other #region-us
|

---
Meta-Llama-3-70B-Instruct-GGUF
==============================
Original Model
--------------
meta-llama/Meta-Llama-3-70B-Instruct
Run with LlamaEdge
------------------
* LlamaEdge version: v0.8.3 and above
* Prompt template
+ Prompt type: 'llama-3-chat'
+ Prompt string
* Context size: '8192'
* Run as LlamaEdge service
* Run as LlamaEdge command app
Quantized GGUF Models
---------------------
The f16 GGUF model for the original model can be found in second-state/Meta-Llama-3-70B-Instruct-f16-GGUF
*Quantized with URL b2715.*
| [] | [
"TAGS\n#gguf #text-generation #en #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-other #region-us \n"
] |
null | null | RVC v2 Model ΠΠ°Π½Π½Π° Π€ΡΠΈΡΠΊΠ΅ (Jeanne Friske) Snowie v3 pretrain
Credit: @Koshatka_Lana on YouTube
Enjoy using it! Don't forget to leave credits! =)
An example for a 180 epoch model: https://cdn.discordapp.com/attachments/1193711231576584345/1234758449833967638/AstraLabs-1066386649359073300.mp3?ex=6631e5eb&is=6630946b&hm=97ea97408015c7948bf35416d2864ee72554037bb6d73d67d3842eafcd8834bd& | {"tags": ["music"]} | ToeBoe/JeanneFriske | null | [
"music",
"region:us"
] | null | 2024-04-29T12:41:43+00:00 | [] | [] | TAGS
#music #region-us
| RVC v2 Model ΠΠ°Π½Π½Π° Π€ΡΠΈΡΠΊΠ΅ (Jeanne Friske) Snowie v3 pretrain
Credit: @Koshatka_Lana on YouTube
Enjoy using it! Don't forget to leave credits! =)
An example for a 180 epoch model: URL | [] | [
"TAGS\n#music #region-us \n"
] |
text-generation | transformers | # tmp
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087 as a base.
### Models Merged
The following models were included in the merge:
* ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
* ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
dtype: bfloat16
merge_method: task_arithmetic
parameters:
int8_mask: 1.0
normalize: 0.0
slices:
- sources:
- layer_range: [0, 4]
model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
parameters:
weight: 0.7279377399402179
- layer_range: [0, 4]
model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
parameters:
weight: 0.15295380041554363
- layer_range: [0, 4]
model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
parameters:
weight: -0.08929832001917964
- sources:
- layer_range: [4, 8]
model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
parameters:
weight: 0.691881657249384
- layer_range: [4, 8]
model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
parameters:
weight: 0.2922325727237859
- layer_range: [4, 8]
model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
parameters:
weight: 0.5080572203176679
- sources:
- layer_range: [8, 12]
model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
parameters:
weight: 0.09187783621015794
- layer_range: [8, 12]
model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
parameters:
weight: -0.012485482975296447
- layer_range: [8, 12]
model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
parameters:
weight: 0.41795960652363595
- sources:
- layer_range: [12, 16]
model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
parameters:
weight: 0.34681087119307275
- layer_range: [12, 16]
model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
parameters:
weight: -0.06403292076991726
- layer_range: [12, 16]
model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
parameters:
weight: 0.09721311625574781
- sources:
- layer_range: [16, 20]
model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
parameters:
weight: 0.5121357281800163
- layer_range: [16, 20]
model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
parameters:
weight: 0.6220102021390902
- layer_range: [16, 20]
model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
parameters:
weight: 0.18620926164035395
- sources:
- layer_range: [20, 24]
model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
parameters:
weight: 0.41782286184995043
- layer_range: [20, 24]
model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
parameters:
weight: 0.4421406594473506
- layer_range: [20, 24]
model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
parameters:
weight: 0.17389465072652804
- sources:
- layer_range: [24, 28]
model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
parameters:
weight: 0.49147162824520074
- layer_range: [24, 28]
model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
parameters:
weight: 0.33754092637416533
- layer_range: [24, 28]
model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
parameters:
weight: 0.44509618118199307
- sources:
- layer_range: [28, 32]
model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
parameters:
weight: 0.35777289734770956
- layer_range: [28, 32]
model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
parameters:
weight: 0.18435978508773565
- layer_range: [28, 32]
model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
parameters:
weight: 0.3646502716264272
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": []} | yuiseki/YuisekinAIEvol-Mistral-7B-ja-math-v0.1.1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:42:26+00:00 | [
"2212.04089"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2212.04089 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # tmp
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the task arithmetic merge method using ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087 as a base.
### Models Merged
The following models were included in the merge:
* ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
* ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
### Configuration
The following YAML configuration was used to produce this model:
| [
"# tmp\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the task arithmetic merge method using ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330\n* ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2212.04089 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# tmp\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the task arithmetic merge method using ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330\n* ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers |
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` | {"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]} | usr-bin-ksh/autotrain-1vcxw-ff5zv | null | [
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:42:52+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us
|
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit AutoTrain.
# Usage
| [
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] | [
"TAGS\n#transformers #tensorboard #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us \n",
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/c6vjqra | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:43:37+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | null |
# Llama-3-8b-64k-PoSE-GGUf
- This is quantized version of [winglian/Llama-3-8b-64k-PoSE](https://huggingface.co/winglian/Llama-3-8b-64k-PoSE) created using llama.cpp
## Model Description
This model uses [PoSE](https://huggingface.co/papers/2309.10400) to extend Llama's context length from 8k to 64k @ rope_theta: 500000.0.
We used PoSE with continued pretraining on 300M tokens from the RedPajama V1 dataset using data between 6k-8k tokens.
We have further set rope_theta to 2M after continued pre-training to potentially further extend the context past 64k.
This was trained on a subset of the RedPajama v1 dataset with text between 6k-8k context. We trained a rank stabilized LoRA of rank 256. [WandB](https://wandb.ai/oaaic/llama-3-64k/runs/tkcyjt37)
## Llama 3 8b
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes β 8B and 70B parameters β in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaβs sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
<td><strong>Carbon Emitted(tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weβve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Metaβs cybersecurity safety eval suite, measuring Llama 3βs propensity to suggest insecure code when used as a coding assistant, and Llama 3βs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the modelβs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3βs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
| {"language": ["en"], "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "axolotl"], "pipeline_tag": "text-generation", "base_model": "winglian/Llama-3-8b-64k-PoSE"} | QuantFactory/Llama-3-8b-64k-PoSE-GGUF | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"axolotl",
"text-generation",
"en",
"arxiv:2309.10400",
"base_model:winglian/Llama-3-8b-64k-PoSE",
"region:us"
] | null | 2024-04-29T12:43:41+00:00 | [
"2309.10400"
] | [
"en"
] | TAGS
#gguf #facebook #meta #pytorch #llama #llama-3 #axolotl #text-generation #en #arxiv-2309.10400 #base_model-winglian/Llama-3-8b-64k-PoSE #region-us
| Llama-3-8b-64k-PoSE-GGUf
========================
* This is quantized version of winglian/Llama-3-8b-64k-PoSE created using URL
Model Description
-----------------
This model uses PoSE to extend Llama's context length from 8k to 64k @ rope\_theta: 500000.0.
We used PoSE with continued pretraining on 300M tokens from the RedPajama V1 dataset using data between 6k-8k tokens.
We have further set rope\_theta to 2M after continued pre-training to potentially further extend the context past 64k.
This was trained on a subset of the RedPajama v1 dataset with text between 6k-8k context. We trained a rank stabilized LoRA of rank 256. WandB
Llama 3 8b
----------
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
Model developers Meta
Variations Llama 3 comes in two sizes β 8B and 70B parameters β in pre-trained and instruction tuned variants.
Input Models input text only.
Output Models generate text and code only.
Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Llama 3 family of models. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date April 18, 2024.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License A custom commercial license is available at: URL
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here.
Intended Use
------------
Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.
Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
Hardware and Software
---------------------
Training Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
Carbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaβs sustainability program.
CO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
Training Data
-------------
Overview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.
Benchmarks
----------
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.
### Base pretrained models
### Instruction tuned models
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
Safety
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
Refusals
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weβve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL
#### Critical risks
CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### Cyber Security
We have evaluated Llama 3 with CyberSecEval, Metaβs cybersecurity safety eval suite, measuring Llama 3βs propensity to suggest insecure code when used as a coding assistant, and Llama 3βs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.
### Child Safety
Child Safety risk assessments were conducted using a team of experts, to assess the modelβs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.
Finally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.
Ethical Considerations and Limitations
--------------------------------------
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3βs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at URL
| [
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weβve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaβs cybersecurity safety eval suite, measuring Llama 3βs propensity to suggest insecure code when used as a coding assistant, and Llama 3βs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelβs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3βs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL"
] | [
"TAGS\n#gguf #facebook #meta #pytorch #llama #llama-3 #axolotl #text-generation #en #arxiv-2309.10400 #base_model-winglian/Llama-3-8b-64k-PoSE #region-us \n",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weβve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaβs cybersecurity safety eval suite, measuring Llama 3βs propensity to suggest insecure code when used as a coding assistant, and Llama 3βs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelβs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3βs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL"
] |
null | null | # Table of Contents
* [FunSearch](#FunSearch)
* [FunSearch](#FunSearch.FunSearch)
* [make\_request\_for\_prompt](#FunSearch.FunSearch.make_request_for_prompt)
* [request\_samplers](#FunSearch.FunSearch.request_samplers)
* [get\_next\_state](#FunSearch.FunSearch.get_next_state)
* [set\_up\_flow\_state](#FunSearch.FunSearch.set_up_flow_state)
* [save\_message\_to\_state](#FunSearch.FunSearch.save_message_to_state)
* [rename\_key\_message\_in\_state](#FunSearch.FunSearch.rename_key_message_in_state)
* [message\_in\_state](#FunSearch.FunSearch.message_in_state)
* [get\_message\_from\_state](#FunSearch.FunSearch.get_message_from_state)
* [pop\_message\_from\_state](#FunSearch.FunSearch.pop_message_from_state)
* [merge\_message\_request\_state](#FunSearch.FunSearch.merge_message_request_state)
* [register\_data\_to\_state](#FunSearch.FunSearch.register_data_to_state)
* [call\_program\_db](#FunSearch.FunSearch.call_program_db)
* [call\_evaluator](#FunSearch.FunSearch.call_evaluator)
* [call\_sampler](#FunSearch.FunSearch.call_sampler)
* [generate\_reply](#FunSearch.FunSearch.generate_reply)
* [run](#FunSearch.FunSearch.run)
* [ProgramDBFlowModule](#ProgramDBFlowModule)
* [ProgramDBFlowModule.ProgramDBFlow](#ProgramDBFlowModule.ProgramDBFlow)
* [ProgramDBFlow](#ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow)
* [set\_up\_flow\_state](#ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.set_up_flow_state)
* [get\_prompt](#ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.get_prompt)
* [reset\_islands](#ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.reset_islands)
* [register\_program](#ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.register_program)
* [get\_best\_programs](#ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.get_best_programs)
* [run](#ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.run)
* [SamplerFlowModule](#SamplerFlowModule)
* [SamplerFlowModule.SamplerFlow](#SamplerFlowModule.SamplerFlow)
* [SamplerFlow](#SamplerFlowModule.SamplerFlow.SamplerFlow)
* [run](#SamplerFlowModule.SamplerFlow.SamplerFlow.run)
* [EvaluatorFlowModule](#EvaluatorFlowModule)
* [EvaluatorFlowModule.EvaluatorFlow](#EvaluatorFlowModule.EvaluatorFlow)
* [EvaluatorFlow](#EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow)
* [load\_functions](#EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.load_functions)
* [run\_function\_with\_timeout](#EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.run_function_with_timeout)
* [evaluate\_program](#EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.evaluate_program)
* [analyse](#EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.analyse)
* [run](#EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.run)
<a id="FunSearch"></a>
# FunSearch
<a id="FunSearch.FunSearch"></a>
## FunSearch Objects
```python
class FunSearch(CompositeFlow)
```
This class implements FunSearch. This code is an implementation of Funsearch (https://www.nature.com/articles/s41586-023-06924-6) and is heavily inspired by the original code (https://github.com/google-deepmind/funsearch) . It's a Flow in charge of starting, stopping and managing (passing around messages) the FunSearch process. It passes messages around to the following subflows:
- ProgramDBFlow: which is in charge of storing and retrieving programs.
- SamplerFlow: which is in charge of sampling programs.
- EvaluatorFlow: which is in charge of evaluating programs.
*Configuration Parameters*:
- `name` (str): The name of the flow. Default: "FunSearchFlow".
- `description` (str): The description of the flow. Default: "A flow implementing FunSearch"
- `subflows_config` (Dict[str,Any]): A dictionary of subflows configurations. Default:
- `ProgramDBFlow`: By default, it uses the `ProgramDBFlow` class and uses its default parameters.
- `SamplerFlow`: By default, it uses the `SamplerFlow` class and uses its default parameters.
- `EvaluatorFlow`: By default, it uses the `EvaluatorFlow` class and uses its default parameters.
**Input Interface**:
- `from` (str): The flow from which the message is coming from. It can be one of the following: "FunSearch", "SamplerFlow", "EvaluatorFlow", "ProgramDBFlow".
- `operation` (str): The operation to perform. It can be one of the following: "start", "stop", "get_prompt", "get_best_programs_per_island", "register_program".
- `content` (Dict[str,Any]): The content associated to an operation. Here is the expected content for each operation:
- "start":
- `num_samplers` (int): The number of samplers to start up. Note that it's still restricted by the number of workers available. Default: 1.
- "stop":
- No content. Pass either an empty dictionary or None. Works also with no content.
- "get_prompt":
- No content. Pass either an empty dictionary or None. Works also with no content.
- "get_best_programs_per_island":
- No content. Pass either an empty dictionary or None. Works also with no content.
**Output Interface**:
- `retrieved` (Dict[str,Any]): The retrieved data.
**Citation**:
@Article{FunSearch2023,
author = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},
journal = {Nature},
title = {Mathematical discoveries from program search with large language models},
year = {2023},
doi = {10.1038/s41586-023-06924-6}
}
<a id="FunSearch.FunSearch.make_request_for_prompt"></a>
#### make\_request\_for\_prompt
```python
def make_request_for_prompt()
```
This method makes a request for a prompt. It sends a message to itself with the operation "get_prompt" which will trigger the flow to call the `ProgramDBFlow` to get a prompt.
<a id="FunSearch.FunSearch.request_samplers"></a>
#### request\_samplers
```python
def request_samplers(input_message: FlowMessage)
```
This method requests samplers. It sends a message to itself with the operation "get_prompt" which will trigger the flow to call the `ProgramDBFlow` to get a prompt.
**Arguments**:
- `input_message` (`FlowMessage`): The input message that triggered the request for samplers.
<a id="FunSearch.FunSearch.get_next_state"></a>
#### get\_next\_state
```python
def get_next_state(input_message: FlowMessage)
```
This method determines the next state of the flow based on the input message. It will return the next state based on the current state and the message received.
**Arguments**:
- `input_message` (`FlowMessage`): The input message that triggered the request for the next state.
**Returns**:
`str`: The next state of the flow.
<a id="FunSearch.FunSearch.set_up_flow_state"></a>
#### set\_up\_flow\_state
```python
def set_up_flow_state()
```
This method sets up the state of the flow. It's called at the beginning of the flow.
<a id="FunSearch.FunSearch.save_message_to_state"></a>
#### save\_message\_to\_state
```python
def save_message_to_state(msg_id: str, message: FlowMessage)
```
This method saves a message to the state of the flow. It's used to keep track of state on a per message basis (i.e., state of the flow depending on the message received and id).
**Arguments**:
- `msg_id` (`str`): The id of the message to save.
- `message` (`FlowMessage`): The message to save.
<a id="FunSearch.FunSearch.rename_key_message_in_state"></a>
#### rename\_key\_message\_in\_state
```python
def rename_key_message_in_state(old_key: str, new_key: str)
```
This method renames a key in the state of the flow in the "msg_requests" dictonary. It's used to rename a key in the state of the flow (i.e., rename a message id).
**Arguments**:
- `old_key` (`str`): The old key to rename.
- `new_key` (`str`): The new key to rename to.
<a id="FunSearch.FunSearch.message_in_state"></a>
#### message\_in\_state
```python
def message_in_state(msg_id: str) -> bool
```
This method checks if a message is in the state of the flow (in "msg_requests" dictionary). It returns True if the message is in the state, otherwise it returns False.
**Arguments**:
- `msg_id` (`str`): The id of the message to check if it's in the state.
**Returns**:
`bool`: True if the message is in the state, otherwise False.
<a id="FunSearch.FunSearch.get_message_from_state"></a>
#### get\_message\_from\_state
```python
def get_message_from_state(msg_id: str) -> Dict[str, Any]
```
This method returns the state associated with a message id in the state of the flow (in "msg_requests" dictionary).
**Arguments**:
- `msg_id` (`str`): The id of the message to get the state from.
**Returns**:
`Dict[str,Any]`: The state associated with the message id.
<a id="FunSearch.FunSearch.pop_message_from_state"></a>
#### pop\_message\_from\_state
```python
def pop_message_from_state(msg_id: str) -> Dict[str, Any]
```
This method pops a message from the state of the flow (in "msg_requests" dictionary). It the state associate to a message and removes it from the state.
**Arguments**:
- `msg_id` (`str`): The id of the message to pop from the state.
**Returns**:
`Dict[str,Any]`: The state associated with the message id.
<a id="FunSearch.FunSearch.merge_message_request_state"></a>
#### merge\_message\_request\_state
```python
def merge_message_request_state(id: str, new_states: Dict[str, Any])
```
This method merges new states to a message in the state of the flow (in "msg_requests" dictionary). It merges new states to a message in the state.
**Arguments**:
- `id` (`str`): The id of the message to merge new states to.
- `new_states` (`Dict[str,Any]`): The new states to merge to the message.
<a id="FunSearch.FunSearch.register_data_to_state"></a>
#### register\_data\_to\_state
```python
def register_data_to_state(input_message: FlowMessage)
```
This method registers the input message data to the flow state. It's called everytime a new input message is received.
**Arguments**:
- `input_message` (`FlowMessage`): The input message
<a id="FunSearch.FunSearch.call_program_db"></a>
#### call\_program\_db
```python
def call_program_db(input_message)
```
This method calls the ProgramDBFlow. It sends a message to the ProgramDBFlow with the data of the input message.
**Arguments**:
- `input_message` (`FlowMessage`): The input message to send to the ProgramDBFlow.
<a id="FunSearch.FunSearch.call_evaluator"></a>
#### call\_evaluator
```python
def call_evaluator(input_message)
```
This method calls the EvaluatorFlow. It sends a message to the EvaluatorFlow with the data of the input message.
**Arguments**:
- `input_message` (`FlowMessage`): The input message to send to the EvaluatorFlow.
<a id="FunSearch.FunSearch.call_sampler"></a>
#### call\_sampler
```python
def call_sampler(input_message)
```
This method calls the SamplerFlow. It sends a message to the SamplerFlow with the data of the input message.
**Arguments**:
- `input_message` (`FlowMessage`): The input message to send to the SamplerFlow.
<a id="FunSearch.FunSearch.generate_reply"></a>
#### generate\_reply
```python
def generate_reply(input_message: FlowMessage)
```
This method generates a reply to a message sent to user. It packages the output message and sends it.
**Arguments**:
- `input_message` (`FlowMessage`): The input message to generate a reply to.
<a id="FunSearch.FunSearch.run"></a>
#### run
```python
def run(input_message: FlowMessage)
```
This method runs the flow. It's the main method of the flow. It's called when the flow is executed.
<a id="ProgramDBFlowModule"></a>
# ProgramDBFlowModule
<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow"></a>
## ProgramDBFlow Objects
```python
class ProgramDBFlow(AtomicFlow)
```
This class implements a ProgramDBFlow. It's a flow that stores programs and their scores in a database. It can also query the database for the best programs or generate a prompt containing stored programs in order to evolve them with a SamplerFlow. This code is an implementation of Funsearch (https://www.nature.com/articles/s41586-023-06924-6) and is heavily inspired by the original code (https://github.com/google-deepmind/funsearch)
**Configuration Parameters**:
- `name` (str): The name of the flow. Default: "ProgramDBFlow"
- `description` (str): A description of the flow. This description is used to generate the help message of the flow. Default: " A flow that saves programs in a database of islands"
- `artifact_to_evolve_name` (str): The name of the artifact/program to evolve. Default: "solve_function"
- `evaluate_function` (str): The function used to evaluate the program. No Default value. This MUST be passed as a parameter.
- `evaluate_file_full_content` (str): The full content of the file containing the evaluation function. No Default value. This MUST be passed as a parameter.
- `num_islands`: The number of islands to use. Default: 3
- `reset_period`: The period in seconds to reset the islands. Default: 3600
- `artifacts_per_prompt`: The number of previous artifacts/programs to include in a prompt. Default: 2
- `temperature`: The temperature of the island. Default: 0.1
- `temperature_period`: The period in seconds to change the temperature. Default: 30000
- `sample_with_replacement`: Whether to sample with replacement. Default: False
- `portion_of_islands_to_reset`: The portion of islands to reset. Default: 0.5
- `template` (dict): The template to use for a program. Default: {"preface": ""}
**Input Interface**:
- `operation` (str): The operation to perform. It can be one of the following: ["register_program","get_prompt","get_best_programs_per_island"]
**Output Interface**:
- `retrieved` (Any): The retrieved data. It can be one of the following:
- If the operation is "get_prompt", it can be a dictionary with the following keys
- `code` (str): The code of the prompt
- `version_generated` (int): The version of the prompt generated
- `island_id` (int): The id of the island that generated the prompt
- `header` (str): The header of the prompt
- If the operation is "register_program", it can be a string with the message "Program registered" or "Program failed to register"
- If the operation is "get_best_programs_per_island", it can be a dictionary with the following keys:
- `best_island_programs` (List[Dict[str,Any]]): A list of dictionaries with the following keys:
- `rank` (int): The rank of the program (1 is the best)
- `score` (float): The score of the program
- `program` (str): The program
- `island_id` (int): The id of the island that generated the program
**Citation**:
@Article{FunSearch2023,
author = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},
journal = {Nature},
title = {Mathematical discoveries from program search with large language models},
year = {2023},
doi = {10.1038/s41586-023-06924-6}
}
<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.set_up_flow_state"></a>
#### set\_up\_flow\_state
```python
def set_up_flow_state()
```
This method sets up the state of the flow and clears the previous messages.
<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.get_prompt"></a>
#### get\_prompt
```python
def get_prompt()
```
This method gets a prompt from an island. It returns the code, the version generated and the island id.
<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.reset_islands"></a>
#### reset\_islands
```python
def reset_islands()
```
This method resets the islands. It resets the worst islands and copies the best programs to the worst islands.
<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.register_program"></a>
#### register\_program
```python
def register_program(program: AbstractArtifact, island_id: int,
scores_per_test: ScoresPerTest)
```
This method registers a program in an island. It also updates the best program if needed.
**Arguments**:
- `program` (`AbstractArtifact`): The program to register
- `island_id` (`int`): The id of the island to register the program
- `scores_per_test` (`ScoresPerTest`): The scores per test of the program
<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.get_best_programs"></a>
#### get\_best\_programs
```python
def get_best_programs() -> List[Dict[str, Any]]
```
This method returns the best programs per island.
<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.run"></a>
#### run
```python
def run(input_message: FlowMessage)
```
This method runs the flow. It performs the operation requested in the input message.
<a id="SamplerFlowModule"></a>
# SamplerFlowModule
<a id="SamplerFlowModule.SamplerFlow"></a>
# SamplerFlowModule.SamplerFlow
<a id="SamplerFlowModule.SamplerFlow.SamplerFlow"></a>
## SamplerFlow Objects
```python
class SamplerFlow(ChatAtomicFlow)
```
This class implements a SamplerFlow. It is a flow that queries a LLM to generate a response to a given input. This class is a child of ChatAtomicFlow.
and expects the same parameters as ChatAtomicFlow (see https://huggingface.co/aiflows/ChatFlowModule).
**Configuration Parameters**:
- `name` (str): The name of the flow. Default: "SamplerFlowModule"
- `description` (str): A description of the flow. Default: "A flow that queries an LLM model to generate prompts for the Sampler flow"
- `backend` Dict[str,Any]: The backend of the flow. Used to call models via an API.
See litellm's supported models and APIs here: https://docs.litellm.ai/docs/providers.
The default parameters of the backend are all defined at aiflows.backends.llm_lite.LiteLLMBackend
(also see the defaul parameters of litellm's completion parameters: https://docs.litellm.ai/docs/completion/input#input-params-1).
Except for the following parameters who are overwritten by the ChatAtomicFlow in ChatAtomicFlow.yaml:
- `model_name` (Union[Dict[str,str],str]): The name of the model to use. Default: "gpt-4"
When using multiple API providers, the model_name can be a dictionary of the form
{"provider_name": "model_name"}. E.g. {"openai": "gpt-3.5-turbo", "azure": "azure/gpt-3.5-turbo"}
Default: "gpt-3.5-turbo" (the name needs to follow the name of the model in litellm https://docs.litellm.ai/docs/providers).
- `n` (int) : The number of answers to generate. Default: 1
- `max_tokens` (int): The maximum number of tokens to generate. Default: 2000
- `temperature` float: The temperature of the generation. Default: 1.0
- `top_p` float: An alternative to sampling with temperature. It instructs the model to consider the results of
the tokens with top_p probability. Default: 0.4
- `frequency_penalty` (number): It is used to penalize new tokens based on their frequency in the text so far. Default: 0.0
- `presence_penalty` (number): It is used to penalize new tokens based on their existence in the text so far. Default: 0.0
- `stream` (bool): Whether to stream the response or not. Default: false
- `system_message_prompt_template` (Dict[str,Any]): The template of the system message. It is used to generate the system message. Default: See SamplerFlow.yaml for default.
- `init_human_message_prompt_template` (Dict[str,Any]): The prompt template of the human/user message used to initialize the conversation
(first time in). It is used to generate the human message. It's passed as the user message to the LLM. Default: See SamplerFlow.yaml for default.
- `human_message_prompt_template` (Dict[str,Any]): The prompt template of the human/user message (message used everytime the except the first time in).
It's passed as the user message to the LLM. Default: See SamplerFlow.yaml for default.
- `previous_messages` (Dict[str,Any]): Defines which previous messages to include in the input of the LLM. Note that if `first_k`and `last_k` are both none,
all the messages of the flows's history are added to the input of the LLM. Default:
- `first_k` (int): If defined, adds the first_k earliest messages of the flow's chat history to the input of the LLM. Default: 1
- `last_k` (int): If defined, adds the last_k latest messages of the flow's chat history to the input of the LLM. Default: 1
*Input Interface Initialized (Expected input the first time in flow)*:
- `header` (str): A header message to include in prompt
- `code` (str): The "example" samples to generate our new sample from.
*Input Interface (Expected input the after the first time in flow)*:
- `header` (str): A header message to include in prompt
- `code` (str): The "example" samples to generate our new sample from.
*Output Interface*:
- `api_output` (str): The output of the API call. It is the response of the LLM to the input.
- `from` (str): The name of the flow that generated the output. It's always "SamplerFlow"
**Citation**:
@Article{FunSearch2023,
author = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},
journal = {Nature},
title = {Mathematical discoveries from program search with large language models},
year = {2023},
doi = {10.1038/s41586-023-06924-6}
}
<a id="SamplerFlowModule.SamplerFlow.SamplerFlow.run"></a>
#### run
```python
def run(input_message)
```
This method calls the backend of the flow (so queries the LLM). It calls the backend with the previous messages of the flow.
**Returns**:
`Any`: The output of the backend.
<a id="EvaluatorFlowModule"></a>
# EvaluatorFlowModule
<a id="EvaluatorFlowModule.EvaluatorFlow"></a>
# EvaluatorFlowModule.EvaluatorFlow
<a id="EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow"></a>
## EvaluatorFlow Objects
```python
class EvaluatorFlow(AtomicFlow)
```
This class implements an EvaluatorFlow. It is a flow that evaluates a program (python code) using a given evaluator function. This code is an implementation of Funsearch (https://www.nature.com/articles/s41586-023-06924-6) and is heavily inspired by the original code (https://github.com/google-deepmind/funsearch)
**Configuration Parameters**:
- `name` (str): The name of the flow. Default: "EvaluatorFlow"
- `description` (str): A description of the flow. This description is used to generate the help message of the flow. Default: "A flow that evaluates code on tests"
- `py_file` (str): The python code containing the evaluation function. No default value. This MUST be passed as a parameter.
- `function_to_run_name` (str): The name of the function to run (the evaluation function) in the evaluator file. No default value. This MUST be passed as a parameter.
- `test_inputs` (Dict[str,Any]): A dictionary of test inputs to evaluate the program. Default: {"test1": None, "test2": None}
- `timeout_seconds` (int): The maximum number of seconds to run the evaluation function before returning an error. Default: 10
- `run_error_score` (int): The score to return if the evaluation function fails to run. Default: -100
- `use_test_input_as_key` (bool): Whether to use the test input parameters as the key in the output dictionary. Default: False
**Input Interface**:
- `artifact` (str): The program/artifact to evaluate.
**Output Interface**:
- `scores_per_test` (Dict[str, Dict[str, Any]]): A dictionary of scores per test input.
**Citation**:
@Article{FunSearch2023,
author = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},
journal = {Nature},
title = {Mathematical discoveries from program search with large language models},
year = {2023},
doi = {10.1038/s41586-023-06924-6}
}
<a id="EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.load_functions"></a>
#### load\_functions
```python
def load_functions()
```
Load the functions from the evaluator py file with ast parsing
<a id="EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.run_function_with_timeout"></a>
#### run\_function\_with\_timeout
```python
def run_function_with_timeout(program: str, **kwargs)
```
Run the evaluation function with a timeout
**Arguments**:
- `program` (`str`): The program to evaluate
- `kwargs` (`Dict[str, Any]`): The keyword arguments to pass to the evaluation function
**Returns**:
`Tuple[bool, Any]`: A tuple (bool, result) where bool is True if the function ran successfully and result is the output of the function
<a id="EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.evaluate_program"></a>
#### evaluate\_program
```python
def evaluate_program(program: str, **kwargs)
```
Evaluate the program using the evaluation function
**Arguments**:
- `program` (`str`): The program to evaluate
- `kwargs` (`Dict[str, Any]`): The keyword arguments to pass to the evaluation function
**Returns**:
`Tuple[bool, Any]`: A tuple (bool, result) where bool is True if the function ran successfully and result is the output of the function
<a id="EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.analyse"></a>
#### analyse
```python
def analyse(program: str)
```
Analyse the program on the test inputs
**Arguments**:
- `program` (`str`): The program to evaluate
**Returns**:
`Dict[str, Dict[str, Any]]`: A dictionary of scores per test input
<a id="EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.run"></a>
#### run
```python
def run(input_message: FlowMessage)
```
This method runs the flow. It's the main method of the flow.
**Arguments**:
- `input_message` (`FlowMessage`): The input message
| {"license": "mit"} | aiflows/FunSearchFlowModule | null | [
"license:mit",
"region:us"
] | null | 2024-04-29T12:44:20+00:00 | [] | [] | TAGS
#license-mit #region-us
| # Table of Contents
* FunSearch
* FunSearch
* make\_request\_for\_prompt
* request\_samplers
* get\_next\_state
* set\_up\_flow\_state
* save\_message\_to\_state
* rename\_key\_message\_in\_state
* message\_in\_state
* get\_message\_from\_state
* pop\_message\_from\_state
* merge\_message\_request\_state
* register\_data\_to\_state
* call\_program\_db
* call\_evaluator
* call\_sampler
* generate\_reply
* run
* ProgramDBFlowModule
* ProgramDBFlowModule.ProgramDBFlow
* ProgramDBFlow
* set\_up\_flow\_state
* get\_prompt
* reset\_islands
* register\_program
* get\_best\_programs
* run
* SamplerFlowModule
* SamplerFlowModule.SamplerFlow
* SamplerFlow
* run
* EvaluatorFlowModule
* EvaluatorFlowModule.EvaluatorFlow
* EvaluatorFlow
* load\_functions
* run\_function\_with\_timeout
* evaluate\_program
* analyse
* run
<a id="FunSearch"></a>
# FunSearch
<a id="FunSearch.FunSearch"></a>
## FunSearch Objects
This class implements FunSearch. This code is an implementation of Funsearch (URL and is heavily inspired by the original code (URL . It's a Flow in charge of starting, stopping and managing (passing around messages) the FunSearch process. It passes messages around to the following subflows:
- ProgramDBFlow: which is in charge of storing and retrieving programs.
- SamplerFlow: which is in charge of sampling programs.
- EvaluatorFlow: which is in charge of evaluating programs.
*Configuration Parameters*:
- 'name' (str): The name of the flow. Default: "FunSearchFlow".
- 'description' (str): The description of the flow. Default: "A flow implementing FunSearch"
- 'subflows_config' (Dict[str,Any]): A dictionary of subflows configurations. Default:
- 'ProgramDBFlow': By default, it uses the 'ProgramDBFlow' class and uses its default parameters.
- 'SamplerFlow': By default, it uses the 'SamplerFlow' class and uses its default parameters.
- 'EvaluatorFlow': By default, it uses the 'EvaluatorFlow' class and uses its default parameters.
Input Interface:
- 'from' (str): The flow from which the message is coming from. It can be one of the following: "FunSearch", "SamplerFlow", "EvaluatorFlow", "ProgramDBFlow".
- 'operation' (str): The operation to perform. It can be one of the following: "start", "stop", "get_prompt", "get_best_programs_per_island", "register_program".
- 'content' (Dict[str,Any]): The content associated to an operation. Here is the expected content for each operation:
- "start":
- 'num_samplers' (int): The number of samplers to start up. Note that it's still restricted by the number of workers available. Default: 1.
- "stop":
- No content. Pass either an empty dictionary or None. Works also with no content.
- "get_prompt":
- No content. Pass either an empty dictionary or None. Works also with no content.
- "get_best_programs_per_island":
- No content. Pass either an empty dictionary or None. Works also with no content.
Output Interface:
- 'retrieved' (Dict[str,Any]): The retrieved data.
Citation:
@Article{FunSearch2023,
author = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},
journal = {Nature},
title = {Mathematical discoveries from program search with large language models},
year = {2023},
doi = {10.1038/s41586-023-06924-6}
}
<a id="FunSearch.FunSearch.make_request_for_prompt"></a>
#### make\_request\_for\_prompt
This method makes a request for a prompt. It sends a message to itself with the operation "get_prompt" which will trigger the flow to call the 'ProgramDBFlow' to get a prompt.
<a id="FunSearch.FunSearch.request_samplers"></a>
#### request\_samplers
This method requests samplers. It sends a message to itself with the operation "get_prompt" which will trigger the flow to call the 'ProgramDBFlow' to get a prompt.
Arguments:
- 'input_message' ('FlowMessage'): The input message that triggered the request for samplers.
<a id="FunSearch.FunSearch.get_next_state"></a>
#### get\_next\_state
This method determines the next state of the flow based on the input message. It will return the next state based on the current state and the message received.
Arguments:
- 'input_message' ('FlowMessage'): The input message that triggered the request for the next state.
Returns:
'str': The next state of the flow.
<a id="FunSearch.FunSearch.set_up_flow_state"></a>
#### set\_up\_flow\_state
This method sets up the state of the flow. It's called at the beginning of the flow.
<a id="FunSearch.FunSearch.save_message_to_state"></a>
#### save\_message\_to\_state
This method saves a message to the state of the flow. It's used to keep track of state on a per message basis (i.e., state of the flow depending on the message received and id).
Arguments:
- 'msg_id' ('str'): The id of the message to save.
- 'message' ('FlowMessage'): The message to save.
<a id="FunSearch.FunSearch.rename_key_message_in_state"></a>
#### rename\_key\_message\_in\_state
This method renames a key in the state of the flow in the "msg_requests" dictonary. It's used to rename a key in the state of the flow (i.e., rename a message id).
Arguments:
- 'old_key' ('str'): The old key to rename.
- 'new_key' ('str'): The new key to rename to.
<a id="FunSearch.FunSearch.message_in_state"></a>
#### message\_in\_state
This method checks if a message is in the state of the flow (in "msg_requests" dictionary). It returns True if the message is in the state, otherwise it returns False.
Arguments:
- 'msg_id' ('str'): The id of the message to check if it's in the state.
Returns:
'bool': True if the message is in the state, otherwise False.
<a id="FunSearch.FunSearch.get_message_from_state"></a>
#### get\_message\_from\_state
This method returns the state associated with a message id in the state of the flow (in "msg_requests" dictionary).
Arguments:
- 'msg_id' ('str'): The id of the message to get the state from.
Returns:
'Dict[str,Any]': The state associated with the message id.
<a id="FunSearch.FunSearch.pop_message_from_state"></a>
#### pop\_message\_from\_state
This method pops a message from the state of the flow (in "msg_requests" dictionary). It the state associate to a message and removes it from the state.
Arguments:
- 'msg_id' ('str'): The id of the message to pop from the state.
Returns:
'Dict[str,Any]': The state associated with the message id.
<a id="FunSearch.FunSearch.merge_message_request_state"></a>
#### merge\_message\_request\_state
This method merges new states to a message in the state of the flow (in "msg_requests" dictionary). It merges new states to a message in the state.
Arguments:
- 'id' ('str'): The id of the message to merge new states to.
- 'new_states' ('Dict[str,Any]'): The new states to merge to the message.
<a id="FunSearch.FunSearch.register_data_to_state"></a>
#### register\_data\_to\_state
This method registers the input message data to the flow state. It's called everytime a new input message is received.
Arguments:
- 'input_message' ('FlowMessage'): The input message
<a id="FunSearch.FunSearch.call_program_db"></a>
#### call\_program\_db
This method calls the ProgramDBFlow. It sends a message to the ProgramDBFlow with the data of the input message.
Arguments:
- 'input_message' ('FlowMessage'): The input message to send to the ProgramDBFlow.
<a id="FunSearch.FunSearch.call_evaluator"></a>
#### call\_evaluator
This method calls the EvaluatorFlow. It sends a message to the EvaluatorFlow with the data of the input message.
Arguments:
- 'input_message' ('FlowMessage'): The input message to send to the EvaluatorFlow.
<a id="FunSearch.FunSearch.call_sampler"></a>
#### call\_sampler
This method calls the SamplerFlow. It sends a message to the SamplerFlow with the data of the input message.
Arguments:
- 'input_message' ('FlowMessage'): The input message to send to the SamplerFlow.
<a id="FunSearch.FunSearch.generate_reply"></a>
#### generate\_reply
This method generates a reply to a message sent to user. It packages the output message and sends it.
Arguments:
- 'input_message' ('FlowMessage'): The input message to generate a reply to.
<a id="URL"></a>
#### run
This method runs the flow. It's the main method of the flow. It's called when the flow is executed.
<a id="ProgramDBFlowModule"></a>
# ProgramDBFlowModule
<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow"></a>
## ProgramDBFlow Objects
This class implements a ProgramDBFlow. It's a flow that stores programs and their scores in a database. It can also query the database for the best programs or generate a prompt containing stored programs in order to evolve them with a SamplerFlow. This code is an implementation of Funsearch (URL and is heavily inspired by the original code (URL
Configuration Parameters:
- 'name' (str): The name of the flow. Default: "ProgramDBFlow"
- 'description' (str): A description of the flow. This description is used to generate the help message of the flow. Default: " A flow that saves programs in a database of islands"
- 'artifact_to_evolve_name' (str): The name of the artifact/program to evolve. Default: "solve_function"
- 'evaluate_function' (str): The function used to evaluate the program. No Default value. This MUST be passed as a parameter.
- 'evaluate_file_full_content' (str): The full content of the file containing the evaluation function. No Default value. This MUST be passed as a parameter.
- 'num_islands': The number of islands to use. Default: 3
- 'reset_period': The period in seconds to reset the islands. Default: 3600
- 'artifacts_per_prompt': The number of previous artifacts/programs to include in a prompt. Default: 2
- 'temperature': The temperature of the island. Default: 0.1
- 'temperature_period': The period in seconds to change the temperature. Default: 30000
- 'sample_with_replacement': Whether to sample with replacement. Default: False
- 'portion_of_islands_to_reset': The portion of islands to reset. Default: 0.5
- 'template' (dict): The template to use for a program. Default: {"preface": ""}
Input Interface:
- 'operation' (str): The operation to perform. It can be one of the following: ["register_program","get_prompt","get_best_programs_per_island"]
Output Interface:
- 'retrieved' (Any): The retrieved data. It can be one of the following:
- If the operation is "get_prompt", it can be a dictionary with the following keys
- 'code' (str): The code of the prompt
- 'version_generated' (int): The version of the prompt generated
- 'island_id' (int): The id of the island that generated the prompt
- 'header' (str): The header of the prompt
- If the operation is "register_program", it can be a string with the message "Program registered" or "Program failed to register"
- If the operation is "get_best_programs_per_island", it can be a dictionary with the following keys:
- 'best_island_programs' (List[Dict[str,Any]]): A list of dictionaries with the following keys:
- 'rank' (int): The rank of the program (1 is the best)
- 'score' (float): The score of the program
- 'program' (str): The program
- 'island_id' (int): The id of the island that generated the program
Citation:
@Article{FunSearch2023,
author = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},
journal = {Nature},
title = {Mathematical discoveries from program search with large language models},
year = {2023},
doi = {10.1038/s41586-023-06924-6}
}
<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.set_up_flow_state"></a>
#### set\_up\_flow\_state
This method sets up the state of the flow and clears the previous messages.
<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.get_prompt"></a>
#### get\_prompt
This method gets a prompt from an island. It returns the code, the version generated and the island id.
<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.reset_islands"></a>
#### reset\_islands
This method resets the islands. It resets the worst islands and copies the best programs to the worst islands.
<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.register_program"></a>
#### register\_program
This method registers a program in an island. It also updates the best program if needed.
Arguments:
- 'program' ('AbstractArtifact'): The program to register
- 'island_id' ('int'): The id of the island to register the program
- 'scores_per_test' ('ScoresPerTest'): The scores per test of the program
<a id="ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.get_best_programs"></a>
#### get\_best\_programs
This method returns the best programs per island.
<a id="URL"></a>
#### run
This method runs the flow. It performs the operation requested in the input message.
<a id="SamplerFlowModule"></a>
# SamplerFlowModule
<a id="SamplerFlowModule.SamplerFlow"></a>
# SamplerFlowModule.SamplerFlow
<a id="SamplerFlowModule.SamplerFlow.SamplerFlow"></a>
## SamplerFlow Objects
This class implements a SamplerFlow. It is a flow that queries a LLM to generate a response to a given input. This class is a child of ChatAtomicFlow.
and expects the same parameters as ChatAtomicFlow (see URL
Configuration Parameters:
- 'name' (str): The name of the flow. Default: "SamplerFlowModule"
- 'description' (str): A description of the flow. Default: "A flow that queries an LLM model to generate prompts for the Sampler flow"
- 'backend' Dict[str,Any]: The backend of the flow. Used to call models via an API.
See litellm's supported models and APIs here: URL
The default parameters of the backend are all defined at aiflows.backends.llm_lite.LiteLLMBackend
(also see the defaul parameters of litellm's completion parameters: URL
Except for the following parameters who are overwritten by the ChatAtomicFlow in URL:
- 'model_name' (Union[Dict[str,str],str]): The name of the model to use. Default: "gpt-4"
When using multiple API providers, the model_name can be a dictionary of the form
{"provider_name": "model_name"}. E.g. {"openai": "gpt-3.5-turbo", "azure": "azure/gpt-3.5-turbo"}
Default: "gpt-3.5-turbo" (the name needs to follow the name of the model in litellm URL
- 'n' (int) : The number of answers to generate. Default: 1
- 'max_tokens' (int): The maximum number of tokens to generate. Default: 2000
- 'temperature' float: The temperature of the generation. Default: 1.0
- 'top_p' float: An alternative to sampling with temperature. It instructs the model to consider the results of
the tokens with top_p probability. Default: 0.4
- 'frequency_penalty' (number): It is used to penalize new tokens based on their frequency in the text so far. Default: 0.0
- 'presence_penalty' (number): It is used to penalize new tokens based on their existence in the text so far. Default: 0.0
- 'stream' (bool): Whether to stream the response or not. Default: false
- 'system_message_prompt_template' (Dict[str,Any]): The template of the system message. It is used to generate the system message. Default: See URL for default.
- 'init_human_message_prompt_template' (Dict[str,Any]): The prompt template of the human/user message used to initialize the conversation
(first time in). It is used to generate the human message. It's passed as the user message to the LLM. Default: See URL for default.
- 'human_message_prompt_template' (Dict[str,Any]): The prompt template of the human/user message (message used everytime the except the first time in).
It's passed as the user message to the LLM. Default: See URL for default.
- 'previous_messages' (Dict[str,Any]): Defines which previous messages to include in the input of the LLM. Note that if 'first_k'and 'last_k' are both none,
all the messages of the flows's history are added to the input of the LLM. Default:
- 'first_k' (int): If defined, adds the first_k earliest messages of the flow's chat history to the input of the LLM. Default: 1
- 'last_k' (int): If defined, adds the last_k latest messages of the flow's chat history to the input of the LLM. Default: 1
*Input Interface Initialized (Expected input the first time in flow)*:
- 'header' (str): A header message to include in prompt
- 'code' (str): The "example" samples to generate our new sample from.
*Input Interface (Expected input the after the first time in flow)*:
- 'header' (str): A header message to include in prompt
- 'code' (str): The "example" samples to generate our new sample from.
*Output Interface*:
- 'api_output' (str): The output of the API call. It is the response of the LLM to the input.
- 'from' (str): The name of the flow that generated the output. It's always "SamplerFlow"
Citation:
@Article{FunSearch2023,
author = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},
journal = {Nature},
title = {Mathematical discoveries from program search with large language models},
year = {2023},
doi = {10.1038/s41586-023-06924-6}
}
<a id="URL"></a>
#### run
This method calls the backend of the flow (so queries the LLM). It calls the backend with the previous messages of the flow.
Returns:
'Any': The output of the backend.
<a id="EvaluatorFlowModule"></a>
# EvaluatorFlowModule
<a id="EvaluatorFlowModule.EvaluatorFlow"></a>
# EvaluatorFlowModule.EvaluatorFlow
<a id="EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow"></a>
## EvaluatorFlow Objects
This class implements an EvaluatorFlow. It is a flow that evaluates a program (python code) using a given evaluator function. This code is an implementation of Funsearch (URL and is heavily inspired by the original code (URL
Configuration Parameters:
- 'name' (str): The name of the flow. Default: "EvaluatorFlow"
- 'description' (str): A description of the flow. This description is used to generate the help message of the flow. Default: "A flow that evaluates code on tests"
- 'py_file' (str): The python code containing the evaluation function. No default value. This MUST be passed as a parameter.
- 'function_to_run_name' (str): The name of the function to run (the evaluation function) in the evaluator file. No default value. This MUST be passed as a parameter.
- 'test_inputs' (Dict[str,Any]): A dictionary of test inputs to evaluate the program. Default: {"test1": None, "test2": None}
- 'timeout_seconds' (int): The maximum number of seconds to run the evaluation function before returning an error. Default: 10
- 'run_error_score' (int): The score to return if the evaluation function fails to run. Default: -100
- 'use_test_input_as_key' (bool): Whether to use the test input parameters as the key in the output dictionary. Default: False
Input Interface:
- 'artifact' (str): The program/artifact to evaluate.
Output Interface:
- 'scores_per_test' (Dict[str, Dict[str, Any]]): A dictionary of scores per test input.
Citation:
@Article{FunSearch2023,
author = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},
journal = {Nature},
title = {Mathematical discoveries from program search with large language models},
year = {2023},
doi = {10.1038/s41586-023-06924-6}
}
<a id="EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.load_functions"></a>
#### load\_functions
Load the functions from the evaluator py file with ast parsing
<a id="EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.run_function_with_timeout"></a>
#### run\_function\_with\_timeout
Run the evaluation function with a timeout
Arguments:
- 'program' ('str'): The program to evaluate
- 'kwargs' ('Dict[str, Any]'): The keyword arguments to pass to the evaluation function
Returns:
'Tuple[bool, Any]': A tuple (bool, result) where bool is True if the function ran successfully and result is the output of the function
<a id="EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.evaluate_program"></a>
#### evaluate\_program
Evaluate the program using the evaluation function
Arguments:
- 'program' ('str'): The program to evaluate
- 'kwargs' ('Dict[str, Any]'): The keyword arguments to pass to the evaluation function
Returns:
'Tuple[bool, Any]': A tuple (bool, result) where bool is True if the function ran successfully and result is the output of the function
<a id="EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.analyse"></a>
#### analyse
Analyse the program on the test inputs
Arguments:
- 'program' ('str'): The program to evaluate
Returns:
'Dict[str, Dict[str, Any]]': A dictionary of scores per test input
<a id="URL"></a>
#### run
This method runs the flow. It's the main method of the flow.
Arguments:
- 'input_message' ('FlowMessage'): The input message
| [
"# Table of Contents\n\n* FunSearch\n * FunSearch\n * make\\_request\\_for\\_prompt\n * request\\_samplers\n * get\\_next\\_state\n * set\\_up\\_flow\\_state\n * save\\_message\\_to\\_state\n * rename\\_key\\_message\\_in\\_state\n * message\\_in\\_state\n * get\\_message\\_from\\_state\n * pop\\_message\\_from\\_state\n * merge\\_message\\_request\\_state\n * register\\_data\\_to\\_state\n * call\\_program\\_db\n * call\\_evaluator\n * call\\_sampler\n * generate\\_reply\n * run\n* ProgramDBFlowModule\n* ProgramDBFlowModule.ProgramDBFlow\n * ProgramDBFlow\n * set\\_up\\_flow\\_state\n * get\\_prompt\n * reset\\_islands\n * register\\_program\n * get\\_best\\_programs\n * run\n* SamplerFlowModule\n* SamplerFlowModule.SamplerFlow\n * SamplerFlow\n * run\n* EvaluatorFlowModule\n* EvaluatorFlowModule.EvaluatorFlow\n * EvaluatorFlow\n * load\\_functions\n * run\\_function\\_with\\_timeout\n * evaluate\\_program\n * analyse\n * run\n\n<a id=\"FunSearch\"></a>",
"# FunSearch\n\n<a id=\"FunSearch.FunSearch\"></a>",
"## FunSearch Objects\n\n\n\nThis class implements FunSearch. This code is an implementation of Funsearch (URL and is heavily inspired by the original code (URL . It's a Flow in charge of starting, stopping and managing (passing around messages) the FunSearch process. It passes messages around to the following subflows:\n\n- ProgramDBFlow: which is in charge of storing and retrieving programs.\n- SamplerFlow: which is in charge of sampling programs.\n- EvaluatorFlow: which is in charge of evaluating programs.\n\n*Configuration Parameters*:\n\n- 'name' (str): The name of the flow. Default: \"FunSearchFlow\".\n- 'description' (str): The description of the flow. Default: \"A flow implementing FunSearch\"\n- 'subflows_config' (Dict[str,Any]): A dictionary of subflows configurations. Default:\n - 'ProgramDBFlow': By default, it uses the 'ProgramDBFlow' class and uses its default parameters.\n - 'SamplerFlow': By default, it uses the 'SamplerFlow' class and uses its default parameters.\n - 'EvaluatorFlow': By default, it uses the 'EvaluatorFlow' class and uses its default parameters.\n\nInput Interface:\n\n- 'from' (str): The flow from which the message is coming from. It can be one of the following: \"FunSearch\", \"SamplerFlow\", \"EvaluatorFlow\", \"ProgramDBFlow\".\n- 'operation' (str): The operation to perform. It can be one of the following: \"start\", \"stop\", \"get_prompt\", \"get_best_programs_per_island\", \"register_program\".\n- 'content' (Dict[str,Any]): The content associated to an operation. Here is the expected content for each operation:\n - \"start\":\n - 'num_samplers' (int): The number of samplers to start up. Note that it's still restricted by the number of workers available. Default: 1.\n - \"stop\":\n - No content. Pass either an empty dictionary or None. Works also with no content.\n - \"get_prompt\":\n - No content. Pass either an empty dictionary or None. Works also with no content.\n - \"get_best_programs_per_island\":\n - No content. Pass either an empty dictionary or None. Works also with no content.\n\nOutput Interface:\n\n- 'retrieved' (Dict[str,Any]): The retrieved data.\n\nCitation:\n\n@Article{FunSearch2023,\n author = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},\n journal = {Nature},\n title = {Mathematical discoveries from program search with large language models},\n year = {2023},\n doi = {10.1038/s41586-023-06924-6}\n}\n\n<a id=\"FunSearch.FunSearch.make_request_for_prompt\"></a>",
"#### make\\_request\\_for\\_prompt\n\n\n\nThis method makes a request for a prompt. It sends a message to itself with the operation \"get_prompt\" which will trigger the flow to call the 'ProgramDBFlow' to get a prompt.\n\n<a id=\"FunSearch.FunSearch.request_samplers\"></a>",
"#### request\\_samplers\n\n\n\nThis method requests samplers. It sends a message to itself with the operation \"get_prompt\" which will trigger the flow to call the 'ProgramDBFlow' to get a prompt.\n\nArguments:\n\n- 'input_message' ('FlowMessage'): The input message that triggered the request for samplers.\n\n<a id=\"FunSearch.FunSearch.get_next_state\"></a>",
"#### get\\_next\\_state\n\n\n\nThis method determines the next state of the flow based on the input message. It will return the next state based on the current state and the message received.\n\nArguments:\n\n- 'input_message' ('FlowMessage'): The input message that triggered the request for the next state.\n\nReturns:\n\n'str': The next state of the flow.\n\n<a id=\"FunSearch.FunSearch.set_up_flow_state\"></a>",
"#### set\\_up\\_flow\\_state\n\n\n\nThis method sets up the state of the flow. It's called at the beginning of the flow.\n\n<a id=\"FunSearch.FunSearch.save_message_to_state\"></a>",
"#### save\\_message\\_to\\_state\n\n\n\nThis method saves a message to the state of the flow. It's used to keep track of state on a per message basis (i.e., state of the flow depending on the message received and id).\n\nArguments:\n\n- 'msg_id' ('str'): The id of the message to save.\n- 'message' ('FlowMessage'): The message to save.\n\n<a id=\"FunSearch.FunSearch.rename_key_message_in_state\"></a>",
"#### rename\\_key\\_message\\_in\\_state\n\n\n\nThis method renames a key in the state of the flow in the \"msg_requests\" dictonary. It's used to rename a key in the state of the flow (i.e., rename a message id).\n\nArguments:\n\n- 'old_key' ('str'): The old key to rename.\n- 'new_key' ('str'): The new key to rename to.\n\n<a id=\"FunSearch.FunSearch.message_in_state\"></a>",
"#### message\\_in\\_state\n\n\n\nThis method checks if a message is in the state of the flow (in \"msg_requests\" dictionary). It returns True if the message is in the state, otherwise it returns False.\n\nArguments:\n\n- 'msg_id' ('str'): The id of the message to check if it's in the state.\n\nReturns:\n\n'bool': True if the message is in the state, otherwise False.\n\n<a id=\"FunSearch.FunSearch.get_message_from_state\"></a>",
"#### get\\_message\\_from\\_state\n\n\n\nThis method returns the state associated with a message id in the state of the flow (in \"msg_requests\" dictionary).\n\nArguments:\n\n- 'msg_id' ('str'): The id of the message to get the state from.\n\nReturns:\n\n'Dict[str,Any]': The state associated with the message id.\n\n<a id=\"FunSearch.FunSearch.pop_message_from_state\"></a>",
"#### pop\\_message\\_from\\_state\n\n\n\nThis method pops a message from the state of the flow (in \"msg_requests\" dictionary). It the state associate to a message and removes it from the state.\n\nArguments:\n\n- 'msg_id' ('str'): The id of the message to pop from the state.\n\nReturns:\n\n'Dict[str,Any]': The state associated with the message id.\n\n<a id=\"FunSearch.FunSearch.merge_message_request_state\"></a>",
"#### merge\\_message\\_request\\_state\n\n\n\nThis method merges new states to a message in the state of the flow (in \"msg_requests\" dictionary). It merges new states to a message in the state.\n\nArguments:\n\n- 'id' ('str'): The id of the message to merge new states to.\n- 'new_states' ('Dict[str,Any]'): The new states to merge to the message.\n\n<a id=\"FunSearch.FunSearch.register_data_to_state\"></a>",
"#### register\\_data\\_to\\_state\n\n\n\nThis method registers the input message data to the flow state. It's called everytime a new input message is received.\n\nArguments:\n\n- 'input_message' ('FlowMessage'): The input message\n\n<a id=\"FunSearch.FunSearch.call_program_db\"></a>",
"#### call\\_program\\_db\n\n\n\nThis method calls the ProgramDBFlow. It sends a message to the ProgramDBFlow with the data of the input message.\n\nArguments:\n\n- 'input_message' ('FlowMessage'): The input message to send to the ProgramDBFlow.\n\n<a id=\"FunSearch.FunSearch.call_evaluator\"></a>",
"#### call\\_evaluator\n\n\n\nThis method calls the EvaluatorFlow. It sends a message to the EvaluatorFlow with the data of the input message.\n\nArguments:\n\n- 'input_message' ('FlowMessage'): The input message to send to the EvaluatorFlow.\n\n<a id=\"FunSearch.FunSearch.call_sampler\"></a>",
"#### call\\_sampler\n\n\n\nThis method calls the SamplerFlow. It sends a message to the SamplerFlow with the data of the input message.\n\nArguments:\n\n- 'input_message' ('FlowMessage'): The input message to send to the SamplerFlow.\n\n<a id=\"FunSearch.FunSearch.generate_reply\"></a>",
"#### generate\\_reply\n\n\n\nThis method generates a reply to a message sent to user. It packages the output message and sends it.\n\nArguments:\n\n- 'input_message' ('FlowMessage'): The input message to generate a reply to.\n\n<a id=\"URL\"></a>",
"#### run\n\n\n\nThis method runs the flow. It's the main method of the flow. It's called when the flow is executed.\n\n<a id=\"ProgramDBFlowModule\"></a>",
"# ProgramDBFlowModule\n\n<a id=\"ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow\"></a>",
"## ProgramDBFlow Objects\n\n\n\nThis class implements a ProgramDBFlow. It's a flow that stores programs and their scores in a database. It can also query the database for the best programs or generate a prompt containing stored programs in order to evolve them with a SamplerFlow. This code is an implementation of Funsearch (URL and is heavily inspired by the original code (URL\n\nConfiguration Parameters:\n\n- 'name' (str): The name of the flow. Default: \"ProgramDBFlow\"\n- 'description' (str): A description of the flow. This description is used to generate the help message of the flow. Default: \" A flow that saves programs in a database of islands\"\n- 'artifact_to_evolve_name' (str): The name of the artifact/program to evolve. Default: \"solve_function\"\n- 'evaluate_function' (str): The function used to evaluate the program. No Default value. This MUST be passed as a parameter.\n- 'evaluate_file_full_content' (str): The full content of the file containing the evaluation function. No Default value. This MUST be passed as a parameter.\n- 'num_islands': The number of islands to use. Default: 3\n- 'reset_period': The period in seconds to reset the islands. Default: 3600\n- 'artifacts_per_prompt': The number of previous artifacts/programs to include in a prompt. Default: 2\n- 'temperature': The temperature of the island. Default: 0.1\n- 'temperature_period': The period in seconds to change the temperature. Default: 30000\n- 'sample_with_replacement': Whether to sample with replacement. Default: False\n- 'portion_of_islands_to_reset': The portion of islands to reset. Default: 0.5\n- 'template' (dict): The template to use for a program. Default: {\"preface\": \"\"}\n\nInput Interface:\n\n- 'operation' (str): The operation to perform. It can be one of the following: [\"register_program\",\"get_prompt\",\"get_best_programs_per_island\"]\n\nOutput Interface:\n\n- 'retrieved' (Any): The retrieved data. It can be one of the following:\n - If the operation is \"get_prompt\", it can be a dictionary with the following keys\n - 'code' (str): The code of the prompt\n - 'version_generated' (int): The version of the prompt generated\n - 'island_id' (int): The id of the island that generated the prompt\n - 'header' (str): The header of the prompt\n - If the operation is \"register_program\", it can be a string with the message \"Program registered\" or \"Program failed to register\"\n - If the operation is \"get_best_programs_per_island\", it can be a dictionary with the following keys:\n - 'best_island_programs' (List[Dict[str,Any]]): A list of dictionaries with the following keys:\n - 'rank' (int): The rank of the program (1 is the best)\n - 'score' (float): The score of the program\n - 'program' (str): The program\n - 'island_id' (int): The id of the island that generated the program\n\nCitation:\n\n@Article{FunSearch2023,\n author = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},\n journal = {Nature},\n title = {Mathematical discoveries from program search with large language models},\n year = {2023},\n doi = {10.1038/s41586-023-06924-6}\n}\n\n<a id=\"ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.set_up_flow_state\"></a>",
"#### set\\_up\\_flow\\_state\n\n\n\nThis method sets up the state of the flow and clears the previous messages.\n\n<a id=\"ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.get_prompt\"></a>",
"#### get\\_prompt\n\n\n\nThis method gets a prompt from an island. It returns the code, the version generated and the island id.\n\n<a id=\"ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.reset_islands\"></a>",
"#### reset\\_islands\n\n\n\nThis method resets the islands. It resets the worst islands and copies the best programs to the worst islands.\n\n<a id=\"ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.register_program\"></a>",
"#### register\\_program\n\n\n\nThis method registers a program in an island. It also updates the best program if needed.\n\nArguments:\n\n- 'program' ('AbstractArtifact'): The program to register\n- 'island_id' ('int'): The id of the island to register the program\n- 'scores_per_test' ('ScoresPerTest'): The scores per test of the program\n\n<a id=\"ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.get_best_programs\"></a>",
"#### get\\_best\\_programs\n\n\n\nThis method returns the best programs per island.\n\n<a id=\"URL\"></a>",
"#### run\n\n\n\nThis method runs the flow. It performs the operation requested in the input message.\n\n<a id=\"SamplerFlowModule\"></a>",
"# SamplerFlowModule\n\n<a id=\"SamplerFlowModule.SamplerFlow\"></a>",
"# SamplerFlowModule.SamplerFlow\n\n<a id=\"SamplerFlowModule.SamplerFlow.SamplerFlow\"></a>",
"## SamplerFlow Objects\n\n\n\nThis class implements a SamplerFlow. It is a flow that queries a LLM to generate a response to a given input. This class is a child of ChatAtomicFlow.\nand expects the same parameters as ChatAtomicFlow (see URL\n\nConfiguration Parameters:\n- 'name' (str): The name of the flow. Default: \"SamplerFlowModule\"\n- 'description' (str): A description of the flow. Default: \"A flow that queries an LLM model to generate prompts for the Sampler flow\"\n- 'backend' Dict[str,Any]: The backend of the flow. Used to call models via an API.\nSee litellm's supported models and APIs here: URL\nThe default parameters of the backend are all defined at aiflows.backends.llm_lite.LiteLLMBackend\n(also see the defaul parameters of litellm's completion parameters: URL\nExcept for the following parameters who are overwritten by the ChatAtomicFlow in URL:\n - 'model_name' (Union[Dict[str,str],str]): The name of the model to use. Default: \"gpt-4\"\n When using multiple API providers, the model_name can be a dictionary of the form \n {\"provider_name\": \"model_name\"}. E.g. {\"openai\": \"gpt-3.5-turbo\", \"azure\": \"azure/gpt-3.5-turbo\"}\n Default: \"gpt-3.5-turbo\" (the name needs to follow the name of the model in litellm URL\n - 'n' (int) : The number of answers to generate. Default: 1\n - 'max_tokens' (int): The maximum number of tokens to generate. Default: 2000\n - 'temperature' float: The temperature of the generation. Default: 1.0\n - 'top_p' float: An alternative to sampling with temperature. It instructs the model to consider the results of\n the tokens with top_p probability. Default: 0.4\n - 'frequency_penalty' (number): It is used to penalize new tokens based on their frequency in the text so far. Default: 0.0\n - 'presence_penalty' (number): It is used to penalize new tokens based on their existence in the text so far. Default: 0.0\n - 'stream' (bool): Whether to stream the response or not. Default: false\n- 'system_message_prompt_template' (Dict[str,Any]): The template of the system message. It is used to generate the system message. Default: See URL for default.\n- 'init_human_message_prompt_template' (Dict[str,Any]): The prompt template of the human/user message used to initialize the conversation\n(first time in). It is used to generate the human message. It's passed as the user message to the LLM. Default: See URL for default.\n- 'human_message_prompt_template' (Dict[str,Any]): The prompt template of the human/user message (message used everytime the except the first time in).\nIt's passed as the user message to the LLM. Default: See URL for default.\n- 'previous_messages' (Dict[str,Any]): Defines which previous messages to include in the input of the LLM. Note that if 'first_k'and 'last_k' are both none,\nall the messages of the flows's history are added to the input of the LLM. Default:\n - 'first_k' (int): If defined, adds the first_k earliest messages of the flow's chat history to the input of the LLM. Default: 1\n - 'last_k' (int): If defined, adds the last_k latest messages of the flow's chat history to the input of the LLM. Default: 1\n\n*Input Interface Initialized (Expected input the first time in flow)*:\n\n- 'header' (str): A header message to include in prompt\n- 'code' (str): The \"example\" samples to generate our new sample from.\n\n*Input Interface (Expected input the after the first time in flow)*:\n\n- 'header' (str): A header message to include in prompt\n- 'code' (str): The \"example\" samples to generate our new sample from.\n\n*Output Interface*:\n\n- 'api_output' (str): The output of the API call. It is the response of the LLM to the input.\n- 'from' (str): The name of the flow that generated the output. It's always \"SamplerFlow\"\n\n\nCitation:\n\n@Article{FunSearch2023,\n author = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},\n journal = {Nature},\n title = {Mathematical discoveries from program search with large language models},\n year = {2023},\n doi = {10.1038/s41586-023-06924-6}\n}\n\n<a id=\"URL\"></a>",
"#### run\n\n\n\nThis method calls the backend of the flow (so queries the LLM). It calls the backend with the previous messages of the flow.\n\nReturns:\n\n'Any': The output of the backend.\n\n<a id=\"EvaluatorFlowModule\"></a>",
"# EvaluatorFlowModule\n\n<a id=\"EvaluatorFlowModule.EvaluatorFlow\"></a>",
"# EvaluatorFlowModule.EvaluatorFlow\n\n\n<a id=\"EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow\"></a>",
"## EvaluatorFlow Objects\n\n\n\nThis class implements an EvaluatorFlow. It is a flow that evaluates a program (python code) using a given evaluator function. This code is an implementation of Funsearch (URL and is heavily inspired by the original code (URL\n\nConfiguration Parameters:\n\n- 'name' (str): The name of the flow. Default: \"EvaluatorFlow\"\n- 'description' (str): A description of the flow. This description is used to generate the help message of the flow. Default: \"A flow that evaluates code on tests\"\n- 'py_file' (str): The python code containing the evaluation function. No default value. This MUST be passed as a parameter.\n- 'function_to_run_name' (str): The name of the function to run (the evaluation function) in the evaluator file. No default value. This MUST be passed as a parameter.\n- 'test_inputs' (Dict[str,Any]): A dictionary of test inputs to evaluate the program. Default: {\"test1\": None, \"test2\": None}\n- 'timeout_seconds' (int): The maximum number of seconds to run the evaluation function before returning an error. Default: 10\n- 'run_error_score' (int): The score to return if the evaluation function fails to run. Default: -100\n- 'use_test_input_as_key' (bool): Whether to use the test input parameters as the key in the output dictionary. Default: False\n\nInput Interface:\n\n- 'artifact' (str): The program/artifact to evaluate.\n\nOutput Interface:\n\n- 'scores_per_test' (Dict[str, Dict[str, Any]]): A dictionary of scores per test input.\n\nCitation:\n\n@Article{FunSearch2023,\n author = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},\n journal = {Nature},\n title = {Mathematical discoveries from program search with large language models},\n year = {2023},\n doi = {10.1038/s41586-023-06924-6}\n}\n\n<a id=\"EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.load_functions\"></a>",
"#### load\\_functions\n\n\n\nLoad the functions from the evaluator py file with ast parsing\n\n<a id=\"EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.run_function_with_timeout\"></a>",
"#### run\\_function\\_with\\_timeout\n\n\n\nRun the evaluation function with a timeout\n\nArguments:\n\n- 'program' ('str'): The program to evaluate\n- 'kwargs' ('Dict[str, Any]'): The keyword arguments to pass to the evaluation function\n\nReturns:\n\n'Tuple[bool, Any]': A tuple (bool, result) where bool is True if the function ran successfully and result is the output of the function\n\n<a id=\"EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.evaluate_program\"></a>",
"#### evaluate\\_program\n\n\n\nEvaluate the program using the evaluation function\n\nArguments:\n\n- 'program' ('str'): The program to evaluate\n- 'kwargs' ('Dict[str, Any]'): The keyword arguments to pass to the evaluation function\n\nReturns:\n\n'Tuple[bool, Any]': A tuple (bool, result) where bool is True if the function ran successfully and result is the output of the function\n\n<a id=\"EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.analyse\"></a>",
"#### analyse\n\n\n\nAnalyse the program on the test inputs\n\nArguments:\n\n- 'program' ('str'): The program to evaluate\n\nReturns:\n\n'Dict[str, Dict[str, Any]]': A dictionary of scores per test input\n\n<a id=\"URL\"></a>",
"#### run\n\n\n\nThis method runs the flow. It's the main method of the flow.\n\nArguments:\n\n- 'input_message' ('FlowMessage'): The input message"
] | [
"TAGS\n#license-mit #region-us \n",
"# Table of Contents\n\n* FunSearch\n * FunSearch\n * make\\_request\\_for\\_prompt\n * request\\_samplers\n * get\\_next\\_state\n * set\\_up\\_flow\\_state\n * save\\_message\\_to\\_state\n * rename\\_key\\_message\\_in\\_state\n * message\\_in\\_state\n * get\\_message\\_from\\_state\n * pop\\_message\\_from\\_state\n * merge\\_message\\_request\\_state\n * register\\_data\\_to\\_state\n * call\\_program\\_db\n * call\\_evaluator\n * call\\_sampler\n * generate\\_reply\n * run\n* ProgramDBFlowModule\n* ProgramDBFlowModule.ProgramDBFlow\n * ProgramDBFlow\n * set\\_up\\_flow\\_state\n * get\\_prompt\n * reset\\_islands\n * register\\_program\n * get\\_best\\_programs\n * run\n* SamplerFlowModule\n* SamplerFlowModule.SamplerFlow\n * SamplerFlow\n * run\n* EvaluatorFlowModule\n* EvaluatorFlowModule.EvaluatorFlow\n * EvaluatorFlow\n * load\\_functions\n * run\\_function\\_with\\_timeout\n * evaluate\\_program\n * analyse\n * run\n\n<a id=\"FunSearch\"></a>",
"# FunSearch\n\n<a id=\"FunSearch.FunSearch\"></a>",
"## FunSearch Objects\n\n\n\nThis class implements FunSearch. This code is an implementation of Funsearch (URL and is heavily inspired by the original code (URL . It's a Flow in charge of starting, stopping and managing (passing around messages) the FunSearch process. It passes messages around to the following subflows:\n\n- ProgramDBFlow: which is in charge of storing and retrieving programs.\n- SamplerFlow: which is in charge of sampling programs.\n- EvaluatorFlow: which is in charge of evaluating programs.\n\n*Configuration Parameters*:\n\n- 'name' (str): The name of the flow. Default: \"FunSearchFlow\".\n- 'description' (str): The description of the flow. Default: \"A flow implementing FunSearch\"\n- 'subflows_config' (Dict[str,Any]): A dictionary of subflows configurations. Default:\n - 'ProgramDBFlow': By default, it uses the 'ProgramDBFlow' class and uses its default parameters.\n - 'SamplerFlow': By default, it uses the 'SamplerFlow' class and uses its default parameters.\n - 'EvaluatorFlow': By default, it uses the 'EvaluatorFlow' class and uses its default parameters.\n\nInput Interface:\n\n- 'from' (str): The flow from which the message is coming from. It can be one of the following: \"FunSearch\", \"SamplerFlow\", \"EvaluatorFlow\", \"ProgramDBFlow\".\n- 'operation' (str): The operation to perform. It can be one of the following: \"start\", \"stop\", \"get_prompt\", \"get_best_programs_per_island\", \"register_program\".\n- 'content' (Dict[str,Any]): The content associated to an operation. Here is the expected content for each operation:\n - \"start\":\n - 'num_samplers' (int): The number of samplers to start up. Note that it's still restricted by the number of workers available. Default: 1.\n - \"stop\":\n - No content. Pass either an empty dictionary or None. Works also with no content.\n - \"get_prompt\":\n - No content. Pass either an empty dictionary or None. Works also with no content.\n - \"get_best_programs_per_island\":\n - No content. Pass either an empty dictionary or None. Works also with no content.\n\nOutput Interface:\n\n- 'retrieved' (Dict[str,Any]): The retrieved data.\n\nCitation:\n\n@Article{FunSearch2023,\n author = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},\n journal = {Nature},\n title = {Mathematical discoveries from program search with large language models},\n year = {2023},\n doi = {10.1038/s41586-023-06924-6}\n}\n\n<a id=\"FunSearch.FunSearch.make_request_for_prompt\"></a>",
"#### make\\_request\\_for\\_prompt\n\n\n\nThis method makes a request for a prompt. It sends a message to itself with the operation \"get_prompt\" which will trigger the flow to call the 'ProgramDBFlow' to get a prompt.\n\n<a id=\"FunSearch.FunSearch.request_samplers\"></a>",
"#### request\\_samplers\n\n\n\nThis method requests samplers. It sends a message to itself with the operation \"get_prompt\" which will trigger the flow to call the 'ProgramDBFlow' to get a prompt.\n\nArguments:\n\n- 'input_message' ('FlowMessage'): The input message that triggered the request for samplers.\n\n<a id=\"FunSearch.FunSearch.get_next_state\"></a>",
"#### get\\_next\\_state\n\n\n\nThis method determines the next state of the flow based on the input message. It will return the next state based on the current state and the message received.\n\nArguments:\n\n- 'input_message' ('FlowMessage'): The input message that triggered the request for the next state.\n\nReturns:\n\n'str': The next state of the flow.\n\n<a id=\"FunSearch.FunSearch.set_up_flow_state\"></a>",
"#### set\\_up\\_flow\\_state\n\n\n\nThis method sets up the state of the flow. It's called at the beginning of the flow.\n\n<a id=\"FunSearch.FunSearch.save_message_to_state\"></a>",
"#### save\\_message\\_to\\_state\n\n\n\nThis method saves a message to the state of the flow. It's used to keep track of state on a per message basis (i.e., state of the flow depending on the message received and id).\n\nArguments:\n\n- 'msg_id' ('str'): The id of the message to save.\n- 'message' ('FlowMessage'): The message to save.\n\n<a id=\"FunSearch.FunSearch.rename_key_message_in_state\"></a>",
"#### rename\\_key\\_message\\_in\\_state\n\n\n\nThis method renames a key in the state of the flow in the \"msg_requests\" dictonary. It's used to rename a key in the state of the flow (i.e., rename a message id).\n\nArguments:\n\n- 'old_key' ('str'): The old key to rename.\n- 'new_key' ('str'): The new key to rename to.\n\n<a id=\"FunSearch.FunSearch.message_in_state\"></a>",
"#### message\\_in\\_state\n\n\n\nThis method checks if a message is in the state of the flow (in \"msg_requests\" dictionary). It returns True if the message is in the state, otherwise it returns False.\n\nArguments:\n\n- 'msg_id' ('str'): The id of the message to check if it's in the state.\n\nReturns:\n\n'bool': True if the message is in the state, otherwise False.\n\n<a id=\"FunSearch.FunSearch.get_message_from_state\"></a>",
"#### get\\_message\\_from\\_state\n\n\n\nThis method returns the state associated with a message id in the state of the flow (in \"msg_requests\" dictionary).\n\nArguments:\n\n- 'msg_id' ('str'): The id of the message to get the state from.\n\nReturns:\n\n'Dict[str,Any]': The state associated with the message id.\n\n<a id=\"FunSearch.FunSearch.pop_message_from_state\"></a>",
"#### pop\\_message\\_from\\_state\n\n\n\nThis method pops a message from the state of the flow (in \"msg_requests\" dictionary). It the state associate to a message and removes it from the state.\n\nArguments:\n\n- 'msg_id' ('str'): The id of the message to pop from the state.\n\nReturns:\n\n'Dict[str,Any]': The state associated with the message id.\n\n<a id=\"FunSearch.FunSearch.merge_message_request_state\"></a>",
"#### merge\\_message\\_request\\_state\n\n\n\nThis method merges new states to a message in the state of the flow (in \"msg_requests\" dictionary). It merges new states to a message in the state.\n\nArguments:\n\n- 'id' ('str'): The id of the message to merge new states to.\n- 'new_states' ('Dict[str,Any]'): The new states to merge to the message.\n\n<a id=\"FunSearch.FunSearch.register_data_to_state\"></a>",
"#### register\\_data\\_to\\_state\n\n\n\nThis method registers the input message data to the flow state. It's called everytime a new input message is received.\n\nArguments:\n\n- 'input_message' ('FlowMessage'): The input message\n\n<a id=\"FunSearch.FunSearch.call_program_db\"></a>",
"#### call\\_program\\_db\n\n\n\nThis method calls the ProgramDBFlow. It sends a message to the ProgramDBFlow with the data of the input message.\n\nArguments:\n\n- 'input_message' ('FlowMessage'): The input message to send to the ProgramDBFlow.\n\n<a id=\"FunSearch.FunSearch.call_evaluator\"></a>",
"#### call\\_evaluator\n\n\n\nThis method calls the EvaluatorFlow. It sends a message to the EvaluatorFlow with the data of the input message.\n\nArguments:\n\n- 'input_message' ('FlowMessage'): The input message to send to the EvaluatorFlow.\n\n<a id=\"FunSearch.FunSearch.call_sampler\"></a>",
"#### call\\_sampler\n\n\n\nThis method calls the SamplerFlow. It sends a message to the SamplerFlow with the data of the input message.\n\nArguments:\n\n- 'input_message' ('FlowMessage'): The input message to send to the SamplerFlow.\n\n<a id=\"FunSearch.FunSearch.generate_reply\"></a>",
"#### generate\\_reply\n\n\n\nThis method generates a reply to a message sent to user. It packages the output message and sends it.\n\nArguments:\n\n- 'input_message' ('FlowMessage'): The input message to generate a reply to.\n\n<a id=\"URL\"></a>",
"#### run\n\n\n\nThis method runs the flow. It's the main method of the flow. It's called when the flow is executed.\n\n<a id=\"ProgramDBFlowModule\"></a>",
"# ProgramDBFlowModule\n\n<a id=\"ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow\"></a>",
"## ProgramDBFlow Objects\n\n\n\nThis class implements a ProgramDBFlow. It's a flow that stores programs and their scores in a database. It can also query the database for the best programs or generate a prompt containing stored programs in order to evolve them with a SamplerFlow. This code is an implementation of Funsearch (URL and is heavily inspired by the original code (URL\n\nConfiguration Parameters:\n\n- 'name' (str): The name of the flow. Default: \"ProgramDBFlow\"\n- 'description' (str): A description of the flow. This description is used to generate the help message of the flow. Default: \" A flow that saves programs in a database of islands\"\n- 'artifact_to_evolve_name' (str): The name of the artifact/program to evolve. Default: \"solve_function\"\n- 'evaluate_function' (str): The function used to evaluate the program. No Default value. This MUST be passed as a parameter.\n- 'evaluate_file_full_content' (str): The full content of the file containing the evaluation function. No Default value. This MUST be passed as a parameter.\n- 'num_islands': The number of islands to use. Default: 3\n- 'reset_period': The period in seconds to reset the islands. Default: 3600\n- 'artifacts_per_prompt': The number of previous artifacts/programs to include in a prompt. Default: 2\n- 'temperature': The temperature of the island. Default: 0.1\n- 'temperature_period': The period in seconds to change the temperature. Default: 30000\n- 'sample_with_replacement': Whether to sample with replacement. Default: False\n- 'portion_of_islands_to_reset': The portion of islands to reset. Default: 0.5\n- 'template' (dict): The template to use for a program. Default: {\"preface\": \"\"}\n\nInput Interface:\n\n- 'operation' (str): The operation to perform. It can be one of the following: [\"register_program\",\"get_prompt\",\"get_best_programs_per_island\"]\n\nOutput Interface:\n\n- 'retrieved' (Any): The retrieved data. It can be one of the following:\n - If the operation is \"get_prompt\", it can be a dictionary with the following keys\n - 'code' (str): The code of the prompt\n - 'version_generated' (int): The version of the prompt generated\n - 'island_id' (int): The id of the island that generated the prompt\n - 'header' (str): The header of the prompt\n - If the operation is \"register_program\", it can be a string with the message \"Program registered\" or \"Program failed to register\"\n - If the operation is \"get_best_programs_per_island\", it can be a dictionary with the following keys:\n - 'best_island_programs' (List[Dict[str,Any]]): A list of dictionaries with the following keys:\n - 'rank' (int): The rank of the program (1 is the best)\n - 'score' (float): The score of the program\n - 'program' (str): The program\n - 'island_id' (int): The id of the island that generated the program\n\nCitation:\n\n@Article{FunSearch2023,\n author = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},\n journal = {Nature},\n title = {Mathematical discoveries from program search with large language models},\n year = {2023},\n doi = {10.1038/s41586-023-06924-6}\n}\n\n<a id=\"ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.set_up_flow_state\"></a>",
"#### set\\_up\\_flow\\_state\n\n\n\nThis method sets up the state of the flow and clears the previous messages.\n\n<a id=\"ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.get_prompt\"></a>",
"#### get\\_prompt\n\n\n\nThis method gets a prompt from an island. It returns the code, the version generated and the island id.\n\n<a id=\"ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.reset_islands\"></a>",
"#### reset\\_islands\n\n\n\nThis method resets the islands. It resets the worst islands and copies the best programs to the worst islands.\n\n<a id=\"ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.register_program\"></a>",
"#### register\\_program\n\n\n\nThis method registers a program in an island. It also updates the best program if needed.\n\nArguments:\n\n- 'program' ('AbstractArtifact'): The program to register\n- 'island_id' ('int'): The id of the island to register the program\n- 'scores_per_test' ('ScoresPerTest'): The scores per test of the program\n\n<a id=\"ProgramDBFlowModule.ProgramDBFlow.ProgramDBFlow.get_best_programs\"></a>",
"#### get\\_best\\_programs\n\n\n\nThis method returns the best programs per island.\n\n<a id=\"URL\"></a>",
"#### run\n\n\n\nThis method runs the flow. It performs the operation requested in the input message.\n\n<a id=\"SamplerFlowModule\"></a>",
"# SamplerFlowModule\n\n<a id=\"SamplerFlowModule.SamplerFlow\"></a>",
"# SamplerFlowModule.SamplerFlow\n\n<a id=\"SamplerFlowModule.SamplerFlow.SamplerFlow\"></a>",
"## SamplerFlow Objects\n\n\n\nThis class implements a SamplerFlow. It is a flow that queries a LLM to generate a response to a given input. This class is a child of ChatAtomicFlow.\nand expects the same parameters as ChatAtomicFlow (see URL\n\nConfiguration Parameters:\n- 'name' (str): The name of the flow. Default: \"SamplerFlowModule\"\n- 'description' (str): A description of the flow. Default: \"A flow that queries an LLM model to generate prompts for the Sampler flow\"\n- 'backend' Dict[str,Any]: The backend of the flow. Used to call models via an API.\nSee litellm's supported models and APIs here: URL\nThe default parameters of the backend are all defined at aiflows.backends.llm_lite.LiteLLMBackend\n(also see the defaul parameters of litellm's completion parameters: URL\nExcept for the following parameters who are overwritten by the ChatAtomicFlow in URL:\n - 'model_name' (Union[Dict[str,str],str]): The name of the model to use. Default: \"gpt-4\"\n When using multiple API providers, the model_name can be a dictionary of the form \n {\"provider_name\": \"model_name\"}. E.g. {\"openai\": \"gpt-3.5-turbo\", \"azure\": \"azure/gpt-3.5-turbo\"}\n Default: \"gpt-3.5-turbo\" (the name needs to follow the name of the model in litellm URL\n - 'n' (int) : The number of answers to generate. Default: 1\n - 'max_tokens' (int): The maximum number of tokens to generate. Default: 2000\n - 'temperature' float: The temperature of the generation. Default: 1.0\n - 'top_p' float: An alternative to sampling with temperature. It instructs the model to consider the results of\n the tokens with top_p probability. Default: 0.4\n - 'frequency_penalty' (number): It is used to penalize new tokens based on their frequency in the text so far. Default: 0.0\n - 'presence_penalty' (number): It is used to penalize new tokens based on their existence in the text so far. Default: 0.0\n - 'stream' (bool): Whether to stream the response or not. Default: false\n- 'system_message_prompt_template' (Dict[str,Any]): The template of the system message. It is used to generate the system message. Default: See URL for default.\n- 'init_human_message_prompt_template' (Dict[str,Any]): The prompt template of the human/user message used to initialize the conversation\n(first time in). It is used to generate the human message. It's passed as the user message to the LLM. Default: See URL for default.\n- 'human_message_prompt_template' (Dict[str,Any]): The prompt template of the human/user message (message used everytime the except the first time in).\nIt's passed as the user message to the LLM. Default: See URL for default.\n- 'previous_messages' (Dict[str,Any]): Defines which previous messages to include in the input of the LLM. Note that if 'first_k'and 'last_k' are both none,\nall the messages of the flows's history are added to the input of the LLM. Default:\n - 'first_k' (int): If defined, adds the first_k earliest messages of the flow's chat history to the input of the LLM. Default: 1\n - 'last_k' (int): If defined, adds the last_k latest messages of the flow's chat history to the input of the LLM. Default: 1\n\n*Input Interface Initialized (Expected input the first time in flow)*:\n\n- 'header' (str): A header message to include in prompt\n- 'code' (str): The \"example\" samples to generate our new sample from.\n\n*Input Interface (Expected input the after the first time in flow)*:\n\n- 'header' (str): A header message to include in prompt\n- 'code' (str): The \"example\" samples to generate our new sample from.\n\n*Output Interface*:\n\n- 'api_output' (str): The output of the API call. It is the response of the LLM to the input.\n- 'from' (str): The name of the flow that generated the output. It's always \"SamplerFlow\"\n\n\nCitation:\n\n@Article{FunSearch2023,\n author = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},\n journal = {Nature},\n title = {Mathematical discoveries from program search with large language models},\n year = {2023},\n doi = {10.1038/s41586-023-06924-6}\n}\n\n<a id=\"URL\"></a>",
"#### run\n\n\n\nThis method calls the backend of the flow (so queries the LLM). It calls the backend with the previous messages of the flow.\n\nReturns:\n\n'Any': The output of the backend.\n\n<a id=\"EvaluatorFlowModule\"></a>",
"# EvaluatorFlowModule\n\n<a id=\"EvaluatorFlowModule.EvaluatorFlow\"></a>",
"# EvaluatorFlowModule.EvaluatorFlow\n\n\n<a id=\"EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow\"></a>",
"## EvaluatorFlow Objects\n\n\n\nThis class implements an EvaluatorFlow. It is a flow that evaluates a program (python code) using a given evaluator function. This code is an implementation of Funsearch (URL and is heavily inspired by the original code (URL\n\nConfiguration Parameters:\n\n- 'name' (str): The name of the flow. Default: \"EvaluatorFlow\"\n- 'description' (str): A description of the flow. This description is used to generate the help message of the flow. Default: \"A flow that evaluates code on tests\"\n- 'py_file' (str): The python code containing the evaluation function. No default value. This MUST be passed as a parameter.\n- 'function_to_run_name' (str): The name of the function to run (the evaluation function) in the evaluator file. No default value. This MUST be passed as a parameter.\n- 'test_inputs' (Dict[str,Any]): A dictionary of test inputs to evaluate the program. Default: {\"test1\": None, \"test2\": None}\n- 'timeout_seconds' (int): The maximum number of seconds to run the evaluation function before returning an error. Default: 10\n- 'run_error_score' (int): The score to return if the evaluation function fails to run. Default: -100\n- 'use_test_input_as_key' (bool): Whether to use the test input parameters as the key in the output dictionary. Default: False\n\nInput Interface:\n\n- 'artifact' (str): The program/artifact to evaluate.\n\nOutput Interface:\n\n- 'scores_per_test' (Dict[str, Dict[str, Any]]): A dictionary of scores per test input.\n\nCitation:\n\n@Article{FunSearch2023,\n author = {Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Balog, Matej and Kumar, M. Pawan and Dupont, Emilien and Ruiz, Francisco J. R. and Ellenberg, Jordan and Wang, Pengming and Fawzi, Omar and Kohli, Pushmeet and Fawzi, Alhussein},\n journal = {Nature},\n title = {Mathematical discoveries from program search with large language models},\n year = {2023},\n doi = {10.1038/s41586-023-06924-6}\n}\n\n<a id=\"EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.load_functions\"></a>",
"#### load\\_functions\n\n\n\nLoad the functions from the evaluator py file with ast parsing\n\n<a id=\"EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.run_function_with_timeout\"></a>",
"#### run\\_function\\_with\\_timeout\n\n\n\nRun the evaluation function with a timeout\n\nArguments:\n\n- 'program' ('str'): The program to evaluate\n- 'kwargs' ('Dict[str, Any]'): The keyword arguments to pass to the evaluation function\n\nReturns:\n\n'Tuple[bool, Any]': A tuple (bool, result) where bool is True if the function ran successfully and result is the output of the function\n\n<a id=\"EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.evaluate_program\"></a>",
"#### evaluate\\_program\n\n\n\nEvaluate the program using the evaluation function\n\nArguments:\n\n- 'program' ('str'): The program to evaluate\n- 'kwargs' ('Dict[str, Any]'): The keyword arguments to pass to the evaluation function\n\nReturns:\n\n'Tuple[bool, Any]': A tuple (bool, result) where bool is True if the function ran successfully and result is the output of the function\n\n<a id=\"EvaluatorFlowModule.EvaluatorFlow.EvaluatorFlow.analyse\"></a>",
"#### analyse\n\n\n\nAnalyse the program on the test inputs\n\nArguments:\n\n- 'program' ('str'): The program to evaluate\n\nReturns:\n\n'Dict[str, Dict[str, Any]]': A dictionary of scores per test input\n\n<a id=\"URL\"></a>",
"#### run\n\n\n\nThis method runs the flow. It's the main method of the flow.\n\nArguments:\n\n- 'input_message' ('FlowMessage'): The input message"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with awq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo NousResearch/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install autoawq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from awq import AutoAWQForCausalLM
model = AutoAWQForCausalLM.from_quantized("PrunaAI/NousResearch-Meta-Llama-3-8B-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "NousResearch/Meta-Llama-3-8B"} | PrunaAI/NousResearch-Meta-Llama-3-8B-AWQ-4bit-smashed | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pruna-ai",
"base_model:NousResearch/Meta-Llama-3-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-29T12:44:34+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #pruna-ai #base_model-NousResearch/Meta-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo NousResearch/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with awq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo NousResearch/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #pruna-ai #base_model-NousResearch/Meta-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with awq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo NousResearch/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/meta-llama-Meta-Llama-3-8B-HQQ-1bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/meta-llama-Meta-Llama-3-8B-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "meta-llama/Meta-Llama-3-8B"} | PrunaAI/meta-llama-Meta-Llama-3-8B-HQQ-1bit-smashed | null | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"base_model:meta-llama/Meta-Llama-3-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:44:34+00:00 | [] | [] | TAGS
#transformers #llama #text-generation #pruna-ai #base_model-meta-llama/Meta-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #llama #text-generation #pruna-ai #base_model-meta-llama/Meta-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B-Instruct installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/meta-llama-Meta-Llama-3-8B-Instruct-HQQ-2bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/meta-llama-Meta-Llama-3-8B-Instruct-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B-Instruct before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "meta-llama/Meta-Llama-3-8B-Instruct"} | PrunaAI/meta-llama-Meta-Llama-3-8B-Instruct-HQQ-2bit-smashed | null | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:44:52+00:00 | [] | [] | TAGS
#transformers #llama #text-generation #pruna-ai #conversational #base_model-meta-llama/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B-Instruct installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B-Instruct before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B-Instruct installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B-Instruct before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #llama #text-generation #pruna-ai #conversational #base_model-meta-llama/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B-Instruct installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B-Instruct before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with awq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo NousResearch/Meta-Llama-3-8B-Instruct installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install autoawq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from awq import AutoAWQForCausalLM
model = AutoAWQForCausalLM.from_quantized("PrunaAI/NousResearch-Meta-Llama-3-8B-Instruct-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B-Instruct")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B-Instruct before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "NousResearch/Meta-Llama-3-8B-Instruct"} | PrunaAI/NousResearch-Meta-Llama-3-8B-Instruct-AWQ-4bit-smashed | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-29T12:45:17+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #pruna-ai #conversational #base_model-NousResearch/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo NousResearch/Meta-Llama-3-8B-Instruct installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B-Instruct before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with awq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo NousResearch/Meta-Llama-3-8B-Instruct installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B-Instruct before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #pruna-ai #conversational #base_model-NousResearch/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with awq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo NousResearch/Meta-Llama-3-8B-Instruct installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B-Instruct before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | kawagoshi-llm-team/12B_step2000 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:45:21+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B-Instruct installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/meta-llama-Meta-Llama-3-8B-Instruct-HQQ-1bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/meta-llama-Meta-Llama-3-8B-Instruct-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B-Instruct before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "meta-llama/Meta-Llama-3-8B-Instruct"} | PrunaAI/meta-llama-Meta-Llama-3-8B-Instruct-HQQ-1bit-smashed | null | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:45:34+00:00 | [] | [] | TAGS
#transformers #llama #text-generation #pruna-ai #conversational #base_model-meta-llama/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B-Instruct installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B-Instruct before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B-Instruct installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B-Instruct before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #llama #text-generation #pruna-ai #conversational #base_model-meta-llama/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B-Instruct installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B-Instruct before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/meta-llama-Meta-Llama-3-8B-HQQ-2bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/meta-llama-Meta-Llama-3-8B-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "meta-llama/Meta-Llama-3-8B"} | PrunaAI/meta-llama-Meta-Llama-3-8B-HQQ-2bit-smashed | null | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"base_model:meta-llama/Meta-Llama-3-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T12:50:18+00:00 | [] | [] | TAGS
#transformers #llama #text-generation #pruna-ai #base_model-meta-llama/Meta-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #llama #text-generation #pruna-ai #base_model-meta-llama/Meta-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo meta-llama/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ap-normistral-7b-sft-qlora
This model is a fine-tuned version of [norallm/normistral-7b-warm](https://huggingface.co/norallm/normistral-7b-warm) on the hugodk-sch/aftonposten_title_sft dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6055
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.063 | 1.0 | 264 | 2.1618 |
| 1.2293 | 2.0 | 528 | 1.9121 |
| 0.6985 | 3.0 | 792 | 1.6916 |
| 0.4922 | 4.0 | 1056 | 1.6054 |
| 0.3396 | 5.0 | 1320 | 1.6055 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer"], "datasets": ["hugodk-sch/aftonposten_title_sft"], "base_model": "norallm/normistral-7b-warm", "model-index": [{"name": "ap-normistral-7b-sft-qlora", "results": []}]} | hugodk-sch/ap-normistral-7b-sft-qlora | null | [
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:hugodk-sch/aftonposten_title_sft",
"base_model:norallm/normistral-7b-warm",
"license:apache-2.0",
"4-bit",
"region:us"
] | null | 2024-04-29T12:53:30+00:00 | [] | [] | TAGS
#peft #safetensors #mistral #alignment-handbook #trl #sft #generated_from_trainer #dataset-hugodk-sch/aftonposten_title_sft #base_model-norallm/normistral-7b-warm #license-apache-2.0 #4-bit #region-us
| ap-normistral-7b-sft-qlora
==========================
This model is a fine-tuned version of norallm/normistral-7b-warm on the hugodk-sch/aftonposten\_title\_sft dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6055
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 5
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.15.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.1"
] | [
"TAGS\n#peft #safetensors #mistral #alignment-handbook #trl #sft #generated_from_trainer #dataset-hugodk-sch/aftonposten_title_sft #base_model-norallm/normistral-7b-warm #license-apache-2.0 #4-bit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned-feelings-text-classifier-eng
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3930
- Accuracy: 0.9186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3309 | 1.0 | 564 | 0.2348 | 0.9193 |
| 0.193 | 2.0 | 1128 | 0.2359 | 0.9226 |
| 0.1566 | 3.0 | 1692 | 0.2517 | 0.9241 |
| 0.1275 | 4.0 | 2256 | 0.2629 | 0.9256 |
| 0.1021 | 5.0 | 2820 | 0.3023 | 0.9219 |
| 0.0834 | 6.0 | 3384 | 0.3145 | 0.9193 |
| 0.0679 | 7.0 | 3948 | 0.3338 | 0.9219 |
| 0.0543 | 8.0 | 4512 | 0.3695 | 0.9194 |
| 0.0475 | 9.0 | 5076 | 0.3795 | 0.9198 |
| 0.0389 | 10.0 | 5640 | 0.3930 | 0.9186 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "distilbert-base-cased-finetuned-feelings-text-classifier-eng", "results": []}]} | LukeGPT88/feelings-text-classifier-eng | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:53:46+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-cased-finetuned-feelings-text-classifier-eng
============================================================
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3930
* Accuracy: 0.9186
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ellis-v3-emotion-leadership-multi-label
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1095
- Accuracy: 0.9728
- F1: 0.9318
- Precision: 0.9346
- Recall: 0.9289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.1396 | 1.0 | 5910 | 0.1235 | 0.9669 | 0.9169 | 0.9211 | 0.9127 |
| 0.1025 | 2.0 | 11820 | 0.1095 | 0.9728 | 0.9318 | 0.9346 | 0.9289 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "microsoft/deberta-v3-small", "model-index": [{"name": "ellis-v3-emotion-leadership-multi-label", "results": []}]} | gsl22/ellis-v3-emotion-leadership-multi-label | null | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-small",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:54:34+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v3-small #license-mit #autotrain_compatible #endpoints_compatible #region-us
| ellis-v3-emotion-leadership-multi-label
=======================================
This model is a fine-tuned version of microsoft/deberta-v3-small on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1095
* Accuracy: 0.9728
* F1: 0.9318
* Precision: 0.9346
* Recall: 0.9289
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 3
* eval\_batch\_size: 3
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 3\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v3-small #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 3\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | GaussianMixture/llama3_tokenizer | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:56:42+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | EpicJhon/llama_231 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:57:49+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
## Introduction
This repository provides the baseline model files for CNVSRC2024 (Chinese Continuous Visual Speech Recognition Challenge 2024).
## Usage
Please download these model files and use them in the [baseline code](https://github.com/sectum1919/CNVSRC2024Baseline).
## Performance
The following table shows these models' performance on their own tasks.
| Training Data | Task | CER | File Name |
|:-------------------------:|:----------------------:|:------:|:-----------------------------------------|
| CN-CVS (<4s) | Pre-training | / | model_avg_14_23_cncvs_4s.pth |
| CN-CVS (full) | Pre-training | / | model_avg_last10_cncvs_4s_30s.pth |
| CN-CVS + CNVSRC-Single.Dev| Single-speaker VSR (T1)| 39.66% | model_avg_last5_cncvs_cnvsrc-single.pth |
| CN-CVS + CNVSRC-Multi.Dev | Multi-speaker VSR (T2)| 52.20% | model_avg_last5_cncvs_cnvsrc-multi.pth | | {"metrics": ["cer"]} | chenchen2121/CNVSRC2024Baseline | null | [
"region:us"
] | null | 2024-04-29T12:57:59+00:00 | [] | [] | TAGS
#region-us
| Introduction
------------
This repository provides the baseline model files for CNVSRC2024 (Chinese Continuous Visual Speech Recognition Challenge 2024).
Usage
-----
Please download these model files and use them in the baseline code.
Performance
-----------
The following table shows these models' performance on their own tasks.
| [] | [
"TAGS\n#region-us \n"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1775
- Accuracy: 0.9320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.353 | 1.0 | 694 | 0.2625 | 0.8918 |
| 0.3266 | 2.0 | 1388 | 0.1964 | 0.9224 |
| 0.2636 | 3.0 | 2082 | 0.1775 | 0.9320 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.0.1+cu117
- Datasets 2.18.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224", "model-index": [{"name": "vit-base-patch16-224-finetuned-eurosat", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9320024321037698, "name": "Accuracy"}]}]}]} | mansee/vit-base-patch16-224-finetuned-eurosat | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:58:13+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #dataset-imagefolder #base_model-google/vit-base-patch16-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| vit-base-patch16-224-finetuned-eurosat
======================================
This model is a fine-tuned version of google/vit-base-patch16-224 on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1775
* Accuracy: 0.9320
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.0.1+cu117
* Datasets 2.18.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.0.1+cu117\n* Datasets 2.18.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #dataset-imagefolder #base_model-google/vit-base-patch16-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.0.1+cu117\n* Datasets 2.18.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers | ## TextAttack Model Card
This `bert` model was fine-tuned using TextAttack. The model was fine-tuned
for 3 epochs with a batch size of 8,
a maximum sequence length of 512, and an initial learning rate of 3e-05.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9656666666666667, as measured by the
eval set accuracy, found after 3 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack). | {"language": ["zh"], "license": "apache-2.0", "metrics": ["accuracy"], "pipeline_tag": "text-classification"} | WangA/bert-base-finetuned-ctrip | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T12:58:37+00:00 | [] | [
"zh"
] | TAGS
#transformers #safetensors #bert #text-classification #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ## TextAttack Model Card
This 'bert' model was fine-tuned using TextAttack. The model was fine-tuned
for 3 epochs with a batch size of 8,
a maximum sequence length of 512, and an initial learning rate of 3e-05.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9656666666666667, as measured by the
eval set accuracy, found after 3 epochs.
For more information, check out TextAttack on Github. | [
"## TextAttack Model Card\n\n This 'bert' model was fine-tuned using TextAttack. The model was fine-tuned\n for 3 epochs with a batch size of 8,\n a maximum sequence length of 512, and an initial learning rate of 3e-05.\n Since this was a classification task, the model was trained with a cross-entropy loss function.\n The best score the model achieved on this task was 0.9656666666666667, as measured by the\n eval set accuracy, found after 3 epochs.\n\n For more information, check out TextAttack on Github."
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## TextAttack Model Card\n\n This 'bert' model was fine-tuned using TextAttack. The model was fine-tuned\n for 3 epochs with a batch size of 8,\n a maximum sequence length of 512, and an initial learning rate of 3e-05.\n Since this was a classification task, the model was trained with a cross-entropy loss function.\n The best score the model achieved on this task was 0.9656666666666667, as measured by the\n eval set accuracy, found after 3 epochs.\n\n For more information, check out TextAttack on Github."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vanilla_dpo_iter_1
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the updated and the original datasets.
It achieves the following results on the evaluation set:
- Loss: 0.6274
- Rewards/chosen: -0.0991
- Rewards/rejected: -0.2700
- Rewards/accuracies: 0.6800
- Rewards/margins: 0.1709
- Logps/rejected: -284.5185
- Logps/chosen: -293.9490
- Logits/rejected: -2.5507
- Logits/chosen: -2.6341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6186 | 0.63 | 100 | 0.6274 | -0.0991 | -0.2700 | 0.6800 | 0.1709 | -284.5185 | -293.9490 | -2.5507 | -2.6341 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo"], "datasets": ["updated", "original"], "base_model": "alignment-handbook/zephyr-7b-sft-full", "model-index": [{"name": "vanilla_dpo_iter_1", "results": []}]} | YYYYYYibo/vanilla_dpo_iter_1 | null | [
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:updated",
"dataset:original",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"region:us"
] | null | 2024-04-29T13:00:28+00:00 | [] | [] | TAGS
#peft #safetensors #mistral #alignment-handbook #generated_from_trainer #trl #dpo #dataset-updated #dataset-original #base_model-alignment-handbook/zephyr-7b-sft-full #license-apache-2.0 #region-us
| vanilla\_dpo\_iter\_1
=====================
This model is a fine-tuned version of alignment-handbook/zephyr-7b-sft-full on the updated and the original datasets.
It achieves the following results on the evaluation set:
* Loss: 0.6274
* Rewards/chosen: -0.0991
* Rewards/rejected: -0.2700
* Rewards/accuracies: 0.6800
* Rewards/margins: 0.1709
* Logps/rejected: -284.5185
* Logps/chosen: -293.9490
* Logits/rejected: -2.5507
* Logits/chosen: -2.6341
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-06
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* total\_eval\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 1
### Training results
### Framework versions
* PEFT 0.7.1
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #mistral #alignment-handbook #generated_from_trainer #trl #dpo #dataset-updated #dataset-original #base_model-alignment-handbook/zephyr-7b-sft-full #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | golf2248/k1h4uct | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:01:44+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | golf2248/0rhofi7 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:01:50+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | golf2248/7p5e6xs | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:01:54+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with awq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo aaditya/Llama3-OpenBioLLM-8B installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install autoawq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from awq import AutoAWQForCausalLM
model = AutoAWQForCausalLM.from_quantized("PrunaAI/aaditya-Llama3-OpenBioLLM-8B-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("aaditya/Llama3-OpenBioLLM-8B")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model aaditya/Llama3-OpenBioLLM-8B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "aaditya/Llama3-OpenBioLLM-8B"} | PrunaAI/aaditya-Llama3-OpenBioLLM-8B-AWQ-4bit-smashed | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pruna-ai",
"base_model:aaditya/Llama3-OpenBioLLM-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-29T13:01:56+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #pruna-ai #base_model-aaditya/Llama3-OpenBioLLM-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo aaditya/Llama3-OpenBioLLM-8B installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model aaditya/Llama3-OpenBioLLM-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with awq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo aaditya/Llama3-OpenBioLLM-8B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model aaditya/Llama3-OpenBioLLM-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #pruna-ai #base_model-aaditya/Llama3-OpenBioLLM-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with awq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo aaditya/Llama3-OpenBioLLM-8B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model aaditya/Llama3-OpenBioLLM-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | golf2248/0clmk5d | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:01:58+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
image-classification | transformers |
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
No validation metrics available
| {"tags": ["autotrain", "image-classification"], "datasets": ["as-cle-bert/breastcanc-ultrasound-class"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]} | as-cle-bert/bcus-class-segformer | null | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"image-classification",
"autotrain",
"dataset:as-cle-bert/breastcanc-ultrasound-class",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:02:16+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #segformer #image-classification #autotrain #dataset-as-cle-bert/breastcanc-ultrasound-class #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
No validation metrics available
| [
"# Model Trained Using AutoTrain\n\n- Problem type: Image Classification",
"## Validation Metrics\nNo validation metrics available"
] | [
"TAGS\n#transformers #tensorboard #safetensors #segformer #image-classification #autotrain #dataset-as-cle-bert/breastcanc-ultrasound-class #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoTrain\n\n- Problem type: Image Classification",
"## Validation Metrics\nNo validation metrics available"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final_V1-roberta-text-classification-model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2641
- Accuracy: 0.9502
- F1: 0.8186
- Precision: 0.8164
- Recall: 0.8225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.6615 | 0.11 | 50 | 1.5503 | 0.3199 | 0.1016 | 0.2599 | 0.1518 |
| 0.6244 | 0.22 | 100 | 0.7198 | 0.7692 | 0.4656 | 0.4593 | 0.4818 |
| 0.3344 | 0.33 | 150 | 0.4852 | 0.8893 | 0.6484 | 0.6264 | 0.6733 |
| 0.2596 | 0.44 | 200 | 0.5277 | 0.8805 | 0.6398 | 0.6124 | 0.6748 |
| 0.2173 | 0.55 | 250 | 0.4417 | 0.8849 | 0.6577 | 0.6421 | 0.6750 |
| 0.2393 | 0.66 | 300 | 0.5221 | 0.8707 | 0.6511 | 0.6361 | 0.6684 |
| 0.2229 | 0.76 | 350 | 0.4997 | 0.8928 | 0.6602 | 0.6410 | 0.6814 |
| 0.1482 | 0.87 | 400 | 0.5111 | 0.8983 | 0.6409 | 0.6131 | 0.6810 |
| 0.1831 | 0.98 | 450 | 0.4251 | 0.8827 | 0.6827 | 0.7149 | 0.6957 |
| 0.1882 | 1.09 | 500 | 0.4130 | 0.9043 | 0.6805 | 0.7998 | 0.6878 |
| 0.1182 | 1.2 | 550 | 0.4513 | 0.9076 | 0.6973 | 0.7703 | 0.7004 |
| 0.101 | 1.31 | 600 | 0.3402 | 0.9221 | 0.7040 | 0.8097 | 0.7036 |
| 0.0749 | 1.42 | 650 | 0.1566 | 0.9658 | 0.8229 | 0.8350 | 0.8122 |
| 0.1294 | 1.53 | 700 | 0.1586 | 0.9675 | 0.8336 | 0.8327 | 0.8346 |
| 0.046 | 1.64 | 750 | 0.2010 | 0.9604 | 0.8264 | 0.8211 | 0.8334 |
| 0.0833 | 1.75 | 800 | 0.1707 | 0.9647 | 0.8285 | 0.8244 | 0.8330 |
| 0.0759 | 1.86 | 850 | 0.1625 | 0.9664 | 0.8278 | 0.8285 | 0.8271 |
| 0.0459 | 1.97 | 900 | 0.1831 | 0.9620 | 0.8258 | 0.8200 | 0.8328 |
| 0.0726 | 2.07 | 950 | 0.1753 | 0.9625 | 0.8279 | 0.8287 | 0.8276 |
| 0.0369 | 2.18 | 1000 | 0.1871 | 0.9650 | 0.8300 | 0.8252 | 0.8362 |
| 0.0456 | 2.29 | 1050 | 0.1524 | 0.9683 | 0.8320 | 0.8278 | 0.8367 |
| 0.0371 | 2.4 | 1100 | 0.1857 | 0.9631 | 0.8280 | 0.8219 | 0.8353 |
| 0.0106 | 2.51 | 1150 | 0.1850 | 0.9661 | 0.8318 | 0.8274 | 0.8370 |
| 0.0173 | 2.62 | 1200 | 0.2055 | 0.9647 | 0.8310 | 0.8259 | 0.8374 |
| 0.036 | 2.73 | 1250 | 0.1699 | 0.9694 | 0.8311 | 0.8267 | 0.8358 |
| 0.0176 | 2.84 | 1300 | 0.1780 | 0.9691 | 0.8325 | 0.8274 | 0.8382 |
| 0.0444 | 2.95 | 1350 | 0.1918 | 0.9672 | 0.8319 | 0.8275 | 0.8371 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "roberta-base", "model-index": [{"name": "final_V1-roberta-text-classification-model", "results": []}]} | AmirlyPhd/final_V1-roberta-text-classification-model | null | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:03:16+00:00 | [] | [] | TAGS
#transformers #safetensors #roberta #text-classification #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
| final\_V1-roberta-text-classification-model
===========================================
This model is a fine-tuned version of roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2641
* Accuracy: 0.9502
* F1: 0.8186
* Precision: 0.8164
* Recall: 0.8225
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #roberta #text-classification #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/gemma-2b installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/google-gemma-2b-HQQ-1bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/google-gemma-2b-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/gemma-2b"} | PrunaAI/google-gemma-2b-HQQ-1bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/gemma-2b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:03:35+00:00 | [] | [] | TAGS
#transformers #gemma #text-generation #pruna-ai #base_model-google/gemma-2b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/gemma-2b installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/gemma-2b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #gemma #text-generation #pruna-ai #base_model-google/gemma-2b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/gemma-2b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/gemma-2b installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/google-gemma-2b-HQQ-2bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/google-gemma-2b-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/gemma-2b"} | PrunaAI/google-gemma-2b-HQQ-2bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/gemma-2b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:04:32+00:00 | [] | [] | TAGS
#transformers #gemma #text-generation #pruna-ai #base_model-google/gemma-2b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/gemma-2b installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/gemma-2b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #gemma #text-generation #pruna-ai #base_model-google/gemma-2b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/gemma-2b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hossein0677/token_classification
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1014
- Validation Loss: 0.2498
- Train Precision: 0.6477
- Train Recall: 0.4617
- Train F1: 0.5391
- Train Accuracy: 0.9488
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.1014 | 0.2498 | 0.6477 | 0.4617 | 0.5391 | 0.9488 | 0 |
### Framework versions
- Transformers 4.40.0
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "hossein0677/token_classification", "results": []}]} | hossein0677/token_classification | null | [
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:05:46+00:00 | [] | [] | TAGS
#transformers #tf #distilbert #token-classification #generated_from_keras_callback #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| hossein0677/token\_classification
=================================
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.1014
* Validation Loss: 0.2498
* Train Precision: 0.6477
* Train Recall: 0.4617
* Train F1: 0.5391
* Train Accuracy: 0.9488
* Epoch: 0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 636, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\_decay\_rate': 0.01}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.40.0
* TensorFlow 2.15.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 636, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tf #distilbert #token-classification #generated_from_keras_callback #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 636, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
| Tasks |Metric |Value | |Stderr|
|------------|--------|-----:|---|-----:|
|hellaswag_it|acc |0.4502|Β± |0.0052|
| |acc_norm|0.5994|Β± |0.0051|
|arc_it |acc |0.0813|Β± |0.0080|
| |acc_norm|0.4243|Β± |0.0145| | {"language": ["it"], "license": "apache-2.0", "datasets": ["seeweb/Seeweb-it-292-forLLM"]} | nonsonpratico/phi3-3.8-128k-italian | null | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"it",
"dataset:seeweb/Seeweb-it-292-forLLM",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:07:12+00:00 | [] | [
"it"
] | TAGS
#transformers #safetensors #phi3 #text-generation #conversational #custom_code #it #dataset-seeweb/Seeweb-it-292-forLLM #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| [] | [
"TAGS\n#transformers #safetensors #phi3 #text-generation #conversational #custom_code #it #dataset-seeweb/Seeweb-it-292-forLLM #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meditron-7b-dpo-full-sft-wo-medication_qa
This model is a fine-tuned version of [Minbyul/meditron-7b-wo-medication_qa-sft](https://huggingface.co/Minbyul/meditron-7b-wo-medication_qa-sft) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6347
- Rewards/chosen: -0.0755
- Rewards/rejected: -0.3042
- Rewards/accuracies: 0.7812
- Rewards/margins: 0.2287
- Logps/rejected: -613.1549
- Logps/chosen: -395.2336
- Logits/rejected: -1.0640
- Logits/chosen: -1.1745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "llama2", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "alignment-handbook", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "Minbyul/meditron-7b-wo-medication_qa-sft", "model-index": [{"name": "meditron-7b-dpo-full-sft-wo-medication_qa", "results": []}]} | Minbyul/meditron-7b-dpo-full-sft-wo-medication_qa | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:Minbyul/meditron-7b-wo-medication_qa-sft",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:07:13+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/meditron-7b-wo-medication_qa-sft #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# meditron-7b-dpo-full-sft-wo-medication_qa
This model is a fine-tuned version of Minbyul/meditron-7b-wo-medication_qa-sft on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6347
- Rewards/chosen: -0.0755
- Rewards/rejected: -0.3042
- Rewards/accuracies: 0.7812
- Rewards/margins: 0.2287
- Logps/rejected: -613.1549
- Logps/chosen: -395.2336
- Logits/rejected: -1.0640
- Logits/chosen: -1.1745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# meditron-7b-dpo-full-sft-wo-medication_qa\n\nThis model is a fine-tuned version of Minbyul/meditron-7b-wo-medication_qa-sft on the HuggingFaceH4/ultrafeedback_binarized dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6347\n- Rewards/chosen: -0.0755\n- Rewards/rejected: -0.3042\n- Rewards/accuracies: 0.7812\n- Rewards/margins: 0.2287\n- Logps/rejected: -613.1549\n- Logps/chosen: -395.2336\n- Logits/rejected: -1.0640\n- Logits/chosen: -1.1745",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.0.dev0\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/meditron-7b-wo-medication_qa-sft #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# meditron-7b-dpo-full-sft-wo-medication_qa\n\nThis model is a fine-tuned version of Minbyul/meditron-7b-wo-medication_qa-sft on the HuggingFaceH4/ultrafeedback_binarized dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6347\n- Rewards/chosen: -0.0755\n- Rewards/rejected: -0.3042\n- Rewards/accuracies: 0.7812\n- Rewards/margins: 0.2287\n- Logps/rejected: -613.1549\n- Logps/chosen: -395.2336\n- Logits/rejected: -1.0640\n- Logits/chosen: -1.1745",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.0.dev0\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/7zg0x7v | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:07:42+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]} | nayan8625/llama3-16bit-finetuned | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:09:35+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #pytorch #llama #text-generation #unsloth #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #pytorch #llama #text-generation #unsloth #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: codellama/CodeLlama-7b-hf
model_type: LlamaForCausalLM
tokenizer_type: CodeLlamaTokenizer
is_llama_derived_model: true
hub_model_id: noeloco/modeltest1-dpo
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: noeloco/fizzbuzz-sft
type: alpaca
ds_type: json
hf_use_auth_token: true
push_dataset_to_hub: noeloco
val_set_size: 0.05
output_dir: ./lora-out
chat_template: chatml
rl: dpo
datasets:
- path: noeloco/fizzbuzz-dpo
split: train
data_files:
- /tmp/fizzbuzz-ft/datasets/training-set-dpo.json
#type:
# field_prompt: question
# field_chosen: chosen
# field_rejected: rejected
ds_type: json
#type: intel_apply_chatml
type: chatml.intel
hf_use_auth_token: true
push_dataset_to_hub: noeloco
val_set_size: 0.05
output_dir: ./lora-out
chat_template: chatml
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 16
lora_alpha: 8
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: runpod1
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 2
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug: true
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# modeltest1-dpo
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 222
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0 | {"license": "llama2", "library_name": "peft", "tags": ["axolotl", "dpo", "trl", "generated_from_trainer"], "base_model": "codellama/CodeLlama-7b-hf", "model-index": [{"name": "modeltest1-dpo", "results": []}]} | noeloco/modeltest1-dpo | null | [
"peft",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"axolotl",
"dpo",
"trl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"license:llama2",
"4-bit",
"region:us"
] | null | 2024-04-29T13:09:38+00:00 | [] | [] | TAGS
#peft #pytorch #tensorboard #safetensors #llama #axolotl #dpo #trl #generated_from_trainer #base_model-codellama/CodeLlama-7b-hf #license-llama2 #4-bit #region-us
|
<img src="URL alt="Built with Axolotl" width="200" height="32"/>
<details><summary>See axolotl config</summary>
axolotl version: '0.4.0'
</details><br>
# modeltest1-dpo
This model is a fine-tuned version of codellama/CodeLlama-7b-hf on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 222
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0 | [
"# modeltest1-dpo\n\nThis model is a fine-tuned version of codellama/CodeLlama-7b-hf on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 10\n- training_steps: 222",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.1.2+cu118\n- Datasets 2.15.0\n- Tokenizers 0.15.0"
] | [
"TAGS\n#peft #pytorch #tensorboard #safetensors #llama #axolotl #dpo #trl #generated_from_trainer #base_model-codellama/CodeLlama-7b-hf #license-llama2 #4-bit #region-us \n",
"# modeltest1-dpo\n\nThis model is a fine-tuned version of codellama/CodeLlama-7b-hf on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 10\n- training_steps: 222",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.1.2+cu118\n- Datasets 2.15.0\n- Tokenizers 0.15.0"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 | {"library_name": "peft", "base_model": "Jacaranda/kiswahili_pretrained_no_ext"} | Jacaranda/UlizaLlama_64_128 | null | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:Jacaranda/kiswahili_pretrained_no_ext",
"region:us"
] | null | 2024-04-29T13:10:10+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #llama #arxiv-1910.09700 #base_model-Jacaranda/kiswahili_pretrained_no_ext #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] | [
"TAGS\n#peft #safetensors #llama #arxiv-1910.09700 #base_model-Jacaranda/kiswahili_pretrained_no_ext #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/gemma-7b installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/google-gemma-7b-HQQ-2bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/google-gemma-7b-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/gemma-7b"} | PrunaAI/google-gemma-7b-HQQ-2bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/gemma-7b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:10:39+00:00 | [] | [] | TAGS
#transformers #gemma #text-generation #pruna-ai #base_model-google/gemma-7b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/gemma-7b installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/gemma-7b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #gemma #text-generation #pruna-ai #base_model-google/gemma-7b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/gemma-7b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
null | null |
# Strangemerges_32Neuralsynthesis-7B
Strangemerges_32Neuralsynthesis-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## π§© Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: Gille/StrangeMerges_32-7B-slerp
- model: Kukedlc/NeuralSynthesis-7B-v0.3
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## π» Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Strangemerges_32Neuralsynthesis-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]} | automerger/Strangemerges_32Neuralsynthesis-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null | 2024-04-29T13:10:42+00:00 | [] | [] | TAGS
#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us
|
# Strangemerges_32Neuralsynthesis-7B
Strangemerges_32Neuralsynthesis-7B is an automated merge created by Maxime Labonne using the following configuration.
## Configuration
## Usage
| [
"# Strangemerges_32Neuralsynthesis-7B\n\nStrangemerges_32Neuralsynthesis-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] | [
"TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n",
"# Strangemerges_32Neuralsynthesis-7B\n\nStrangemerges_32Neuralsynthesis-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/gemma-7b installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/google-gemma-7b-HQQ-1bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/google-gemma-7b-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/gemma-7b"} | PrunaAI/google-gemma-7b-HQQ-1bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/gemma-7b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:10:58+00:00 | [] | [] | TAGS
#transformers #gemma #text-generation #pruna-ai #base_model-google/gemma-7b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/gemma-7b installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/gemma-7b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #gemma #text-generation #pruna-ai #base_model-google/gemma-7b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/gemma-7b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/gemma-7b-it installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/google-gemma-7b-it-HQQ-2bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/google-gemma-7b-it-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b-it before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/gemma-7b-it"} | PrunaAI/google-gemma-7b-it-HQQ-2bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/gemma-7b-it",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:11:14+00:00 | [] | [] | TAGS
#transformers #gemma #text-generation #pruna-ai #base_model-google/gemma-7b-it #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/gemma-7b-it installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b-it before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/gemma-7b-it installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b-it before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #gemma #text-generation #pruna-ai #base_model-google/gemma-7b-it #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/gemma-7b-it installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b-it before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo NousResearch/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/NousResearch-Meta-Llama-3-8B-HQQ-2bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/NousResearch-Meta-Llama-3-8B-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "NousResearch/Meta-Llama-3-8B"} | PrunaAI/NousResearch-Meta-Llama-3-8B-HQQ-2bit-smashed | null | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"base_model:NousResearch/Meta-Llama-3-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:11:24+00:00 | [] | [] | TAGS
#transformers #llama #text-generation #pruna-ai #base_model-NousResearch/Meta-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo NousResearch/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo NousResearch/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #llama #text-generation #pruna-ai #base_model-NousResearch/Meta-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo NousResearch/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/gemma-7b-it installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/google-gemma-7b-it-HQQ-1bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/google-gemma-7b-it-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b-it before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/gemma-7b-it"} | PrunaAI/google-gemma-7b-it-HQQ-1bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/gemma-7b-it",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:11:26+00:00 | [] | [] | TAGS
#transformers #gemma #text-generation #pruna-ai #base_model-google/gemma-7b-it #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/gemma-7b-it installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b-it before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/gemma-7b-it installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b-it before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #gemma #text-generation #pruna-ai #base_model-google/gemma-7b-it #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/gemma-7b-it installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b-it before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo NousResearch/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/NousResearch-Meta-Llama-3-8B-HQQ-1bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/NousResearch-Meta-Llama-3-8B-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "NousResearch/Meta-Llama-3-8B"} | PrunaAI/NousResearch-Meta-Llama-3-8B-HQQ-1bit-smashed | null | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"base_model:NousResearch/Meta-Llama-3-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:11:42+00:00 | [] | [] | TAGS
#transformers #llama #text-generation #pruna-ai #base_model-NousResearch/Meta-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo NousResearch/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo NousResearch/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #llama #text-generation #pruna-ai #base_model-NousResearch/Meta-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo NousResearch/Meta-Llama-3-8B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model NousResearch/Meta-Llama-3-8B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
null | null |
I do not own this model. This is uploaded purely for academic demonstrations.
Credit: [BD](https://twitter.com/br_d)
CivitAI: [link](https://civitai.com/models/121083/blazing-drive) | {"license": "creativeml-openrail-m"} | ironjr/BlazingDriveV13md | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-04-29T13:11:48+00:00 | [] | [] | TAGS
#license-creativeml-openrail-m #region-us
|
I do not own this model. This is uploaded purely for academic demonstrations.
Credit: BD
CivitAI: link | [] | [
"TAGS\n#license-creativeml-openrail-m #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2b-sft
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.17.0
- Tokenizers 0.15.2 | {"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "google/gemma-2b", "model-index": [{"name": "gemma-2b-sft", "results": []}]} | DuongTrongChi/gemma-2b-sft | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-04-29T13:14:01+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us
|
# gemma-2b-sft
This model is a fine-tuned version of google/gemma-2b on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.17.0
- Tokenizers 0.15.2 | [
"# gemma-2b-sft\n\nThis model is a fine-tuned version of google/gemma-2b on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.38.2\n- Pytorch 2.1.2\n- Datasets 2.17.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us \n",
"# gemma-2b-sft\n\nThis model is a fine-tuned version of google/gemma-2b on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.38.2\n- Pytorch 2.1.2\n- Datasets 2.17.0\n- Tokenizers 0.15.2"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "microsoft/phi-2"} | TaufikT/code-task | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"region:us"
] | null | 2024-04-29T13:14:14+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-microsoft/phi-2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-microsoft/phi-2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/gemma-2b-it installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/google-gemma-2b-it-HQQ-2bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/google-gemma-2b-it-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b-it before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/gemma-2b-it"} | PrunaAI/google-gemma-2b-it-HQQ-2bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/gemma-2b-it",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:15:04+00:00 | [] | [] | TAGS
#transformers #gemma #text-generation #pruna-ai #base_model-google/gemma-2b-it #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/gemma-2b-it installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b-it before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/gemma-2b-it installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b-it before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #gemma #text-generation #pruna-ai #base_model-google/gemma-2b-it #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/gemma-2b-it installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b-it before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/gemma-2b-it installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/google-gemma-2b-it-HQQ-1bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/google-gemma-2b-it-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b-it before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/gemma-2b-it"} | PrunaAI/google-gemma-2b-it-HQQ-1bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"conversational",
"base_model:google/gemma-2b-it",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:15:55+00:00 | [] | [] | TAGS
#transformers #gemma #text-generation #pruna-ai #conversational #base_model-google/gemma-2b-it #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/gemma-2b-it installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b-it before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/gemma-2b-it installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b-it before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #gemma #text-generation #pruna-ai #conversational #base_model-google/gemma-2b-it #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/gemma-2b-it installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2b-it before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base_ter
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9640
- Bleu: 0.0101
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 2.1521 | 1.0 | 2420 | 1.9929 | 0.0101 | 19.0 |
| 2.0942 | 2.0 | 4840 | 1.9640 | 0.0101 | 19.0 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "google-t5/t5-base", "model-index": [{"name": "t5-base_ter", "results": []}]} | lesha-grishchenko/t5-base_ter | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:16:00+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google-t5/t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| t5-base\_ter
============
This model is a fine-tuned version of google-t5/t5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9640
* Bleu: 0.0101
* Gen Len: 19.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google-t5/t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Uploaded model
- **Developed by:** ayushgs
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | ayushgs/llama-3-8b-mh | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:16:02+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: ayushgs
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: ayushgs\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: ayushgs\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Phi2SFT
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 50
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "microsoft/phi-2", "model-index": [{"name": "Phi2SFT", "results": []}]} | Abinayasankar/Phi2SFT | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-29T13:16:19+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-microsoft/phi-2 #license-mit #region-us
|
# Phi2SFT
This model is a fine-tuned version of microsoft/phi-2 on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 50
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# Phi2SFT\n\nThis model is a fine-tuned version of microsoft/phi-2 on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- training_steps: 50",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-microsoft/phi-2 #license-mit #region-us \n",
"# Phi2SFT\n\nThis model is a fine-tuned version of microsoft/phi-2 on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- training_steps: 50",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Abinayasankar/Phi2-SFTFinetuned | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:16:30+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null | # dolphin-2.9-llama3-8b-GGUF
- Original model: [dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applicationsβ
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/dolphin-2.9-llama3-8b-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/dolphin-2.9-llama3-8b-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/dolphin-2.9-llama3-8b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/dolphin-2.9-llama3-8b-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 β Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: dolphin-2.9-llama3-8b
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Dolphin 2.9 Llama 3 8b π¬
Curated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations
Discord: https://discord.gg/8fbBeC7ZGx
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
My appreciation for the sponsors of Dolphin 2.9:
- [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 10xL40S node
This model is based on Llama-3-8b, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE)
The base model has 8k context, and the full-weight fine-tuning was with 4k sequence length.
It took 2.5 days on 8x L40S provided by Crusoe Cloud
This model was trained FFT on all parameters, using ChatML prompt template format.
example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly.
Dolphin is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that falls within accordance with Meta's Llama-3 license. Dolphin was trained on data generated from GPT4, among other models.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Meta-Llama-3-8B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
tokenizer_use_fast: false
load_in_8bit: false
load_in_4bit: false
strict: false
model_config:
datasets:
- path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/Ultrachat200kunfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/SystemConversations.jsonl
type: sharegpt
conversation: chatml
chat_template: chatml
dataset_prepared_path: /workspace/datasets/dolphin-2.9/thingy
val_set_size: 0.0002
output_dir: ./out
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
gradient_accumulation_steps: 4
micro_batch_size: 3
num_epochs: 3
logging_steps: 1
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5
wandb_project: dolphin-2.9-mixtral-8x22b
wandb_watch:
wandb_run_id:
wandb_log_model:
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
saves_per_epoch: 4
save_total_limit: 2
save_steps:
evals_per_epoch: 4
eval_sample_packing: false
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
eos_token: "<|im_end|>"
pad_token: "<|end_of_text|>"
tokens:
- "<|im_start|>"
- "<|im_end|>"
```
</details><br>
## Quants
GGUF : https://huggingface.co/QuantFactory/dolphin-2.9-llama3-8b-GGUF
GGUF with imatrix: https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF
Exllamav2: https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-exl2
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 7
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
| :-: | :--: | :-: |
| 1.146 | 0.0005 | 1 | 1.1064 |
| 0.6962 | 0.2501 | 555 | 0.6636 |
| 0.6857 | 0.5001 | 1110 | 0.6503 |
| 0.6592 | 0.7502 | 1665 | 0.6419 |
| 0.6465 | 1.0002 | 2220 | 0.6317 |
| 0.5295 | 1.2395 | 2775 | 0.6408 |
| 0.5302 | 1.4895 | 3330 | 0.6351 |
| 0.5188 | 1.7396 | 3885 | 0.6227 |
| 0.521 | 1.9896 | 4440 | 0.6168 |
| 0.3968 | 2.2289 | 4995 | 0.6646 |
| 0.3776 | 2.4789 | 5550 | 0.6619 |
| 0.3983 | 2.7290 | 6105 | 0.6602 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
<!-- original-model-card end -->
| {"license": "other", "tags": ["generated_from_trainer", "axolotl", "GGUF"], "datasets": ["cognitivecomputations/Dolphin-2.9", "teknium/OpenHermes-2.5", "m-a-p/CodeFeedback-Filtered-Instruction", "cognitivecomputations/dolphin-coder", "cognitivecomputations/samantha-data", "HuggingFaceH4/ultrachat_200k", "microsoft/orca-math-word-problems-200k", "abacusai/SystemChat-1.1", "Locutusque/function-calling-chatml", "internlm/Agent-FLAN"], "base_model": "meta-llama/Meta-Llama-3-8B", "quantized_by": "andrijdavid", "model-index": [{"name": "out", "results": []}]} | LiteLLMs/dolphin-2.9-llama3-8b-GGUF | null | [
"gguf",
"generated_from_trainer",
"axolotl",
"GGUF",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:abacusai/SystemChat-1.1",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null | 2024-04-29T13:17:04+00:00 | [] | [] | TAGS
#gguf #generated_from_trainer #axolotl #GGUF #dataset-cognitivecomputations/Dolphin-2.9 #dataset-teknium/OpenHermes-2.5 #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-cognitivecomputations/dolphin-coder #dataset-cognitivecomputations/samantha-data #dataset-HuggingFaceH4/ultrachat_200k #dataset-microsoft/orca-math-word-problems-200k #dataset-abacusai/SystemChat-1.1 #dataset-Locutusque/function-calling-chatml #dataset-internlm/Agent-FLAN #base_model-meta-llama/Meta-Llama-3-8B #license-other #region-us
| # dolphin-2.9-llama3-8b-GGUF
- Original model: dolphin-2.9-llama3-8b
## Description
This repo contains GGUF format model files for dolphin-2.9-llama3-8b.
### About GGUF
GGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* URL. This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* text-generation-webui, Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* Ollama Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applicationsβ
* KoboldCpp, A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* GPT4All, This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* LM Studio An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* LoLLMS Web UI. A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* URL, An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* llama-cpp-python, A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* candle, A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* ctransformers, A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* localGPT An open-source initiative enabling private conversations with documents.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
## How to download GGUF files
Note for manual downloaders: You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* URL
### In 'text-generation-webui'
Under Download Model, you can enter the model repo: LiteLLMs/dolphin-2.9-llama3-8b-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-URL.
Then click Download.
### On the command line, including multiple files at once
I recommend using the 'huggingface-hub' Python library:
Then you can download any individual model file to the current directory, at high speed, with a command like this:
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
For more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI.
To accelerate downloads on fast connections (1Gbit/s or higher), install 'hf_transfer':
And set environment variable 'HF_HUB_ENABLE_HF_TRANSFER' to '1':
Windows Command Line users: You can set the environment variable by running 'set HF_HUB_ENABLE_HF_TRANSFER=1' before the download command.
</details>
## Example 'URL' command
Make sure you are using 'URL' from commit d0cee0d or later.
Change '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change '-c 8192' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the '-p <PROMPT>' argument with '-i -ins'
For other parameters and how to use them, please refer to the URL documentation
## How to run in 'text-generation-webui'
Further instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 β Model URL.
## How to run from Python code
You can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: llama-cpp-python docs.
#### First install the package
Run one of the following commands, according to your system:
#### Simple llama-cpp-python example code
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* LangChain + llama-cpp-python
* LangChain + ctransformers
# Original model card: dolphin-2.9-llama3-8b
# Dolphin 2.9 Llama 3 8b
Curated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations
Discord: URL
<img src="URL width="600" />
My appreciation for the sponsors of Dolphin 2.9:
- Crusoe Cloud - provided excellent on-demand 10xL40S node
This model is based on Llama-3-8b, and is governed by META LLAMA 3 COMMUNITY LICENSE AGREEMENT
The base model has 8k context, and the full-weight fine-tuning was with 4k sequence length.
It took 2.5 days on 8x L40S provided by Crusoe Cloud
This model was trained FFT on all parameters, using ChatML prompt template format.
example:
Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. URL You are responsible for any content you create using this model. Enjoy responsibly.
Dolphin is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that falls within accordance with Meta's Llama-3 license. Dolphin was trained on data generated from GPT4, among other models.
<img src="URL alt="Built with Axolotl" width="200" height="32"/>
<details><summary>See axolotl config</summary>
axolotl version: '0.4.0'
</details><br>
## Quants
GGUF : URL
GGUF with imatrix: URL
Exllamav2: URL
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 7
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
| :-: | :--: | :-: |
| 1.146 | 0.0005 | 1 | 1.1064 |
| 0.6962 | 0.2501 | 555 | 0.6636 |
| 0.6857 | 0.5001 | 1110 | 0.6503 |
| 0.6592 | 0.7502 | 1665 | 0.6419 |
| 0.6465 | 1.0002 | 2220 | 0.6317 |
| 0.5295 | 1.2395 | 2775 | 0.6408 |
| 0.5302 | 1.4895 | 3330 | 0.6351 |
| 0.5188 | 1.7396 | 3885 | 0.6227 |
| 0.521 | 1.9896 | 4440 | 0.6168 |
| 0.3968 | 2.2289 | 4995 | 0.6646 |
| 0.3776 | 2.4789 | 5550 | 0.6619 |
| 0.3983 | 2.7290 | 6105 | 0.6602 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"# dolphin-2.9-llama3-8b-GGUF\n- Original model: dolphin-2.9-llama3-8b",
"## Description\n\nThis repo contains GGUF format model files for dolphin-2.9-llama3-8b.",
"### About GGUF\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n* URL. This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.\n* text-generation-webui, Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.\n* Ollama Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applicationsβ\n* KoboldCpp, A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.\n* GPT4All, This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.\n* LM Studio An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.\n* LoLLMS Web UI. A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.\n* URL, An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.\n* llama-cpp-python, A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.\n* candle, A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.\n* ctransformers, A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.\n* localGPT An open-source initiative enabling private conversations with documents.",
"## Explanation of quantisation methods\n<details>\n <summary>Click to see details</summary>\nThe new methods available are:\n\n* GGML_TYPE_Q2_K - \"type-1\" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)\n* GGML_TYPE_Q3_K - \"type-0\" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.\n* GGML_TYPE_Q4_K - \"type-1\" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.\n* GGML_TYPE_Q5_K - \"type-1\" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw\n* GGML_TYPE_Q6_K - \"type-0\" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.\n</details>",
"## How to download GGUF files\n\nNote for manual downloaders: You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.\n\nThe following clients/libraries will automatically download models for you, providing a list of available models to choose from:\n\n* LM Studio\n* LoLLMS Web UI\n* URL",
"### In 'text-generation-webui'\n\nUnder Download Model, you can enter the model repo: LiteLLMs/dolphin-2.9-llama3-8b-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-URL.\n\nThen click Download.",
"### On the command line, including multiple files at once\n\nI recommend using the 'huggingface-hub' Python library:\n\n\n\nThen you can download any individual model file to the current directory, at high speed, with a command like this:\n\n\n\n<details>\n <summary>More advanced huggingface-cli download usage (click to read)</summary>\n\nYou can also download multiple files at once with a pattern:\n\n\n\nFor more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI.\n\nTo accelerate downloads on fast connections (1Gbit/s or higher), install 'hf_transfer':\n\n\n\nAnd set environment variable 'HF_HUB_ENABLE_HF_TRANSFER' to '1':\n\n\n\nWindows Command Line users: You can set the environment variable by running 'set HF_HUB_ENABLE_HF_TRANSFER=1' before the download command.\n</details>",
"## Example 'URL' command\n\nMake sure you are using 'URL' from commit d0cee0d or later.\n\n\n\nChange '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.\n\nChange '-c 8192' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.\n\nIf you want to have a chat-style conversation, replace the '-p <PROMPT>' argument with '-i -ins'\n\nFor other parameters and how to use them, please refer to the URL documentation",
"## How to run in 'text-generation-webui'\n\nFurther instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 β Model URL.",
"## How to run from Python code\n\nYou can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.",
"### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.",
"#### First install the package\n\nRun one of the following commands, according to your system:",
"#### Simple llama-cpp-python example code",
"## How to use with LangChain\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers",
"# Original model card: dolphin-2.9-llama3-8b",
"# Dolphin 2.9 Llama 3 8b \n\nCurated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations\n\nDiscord: URL\n\n<img src=\"URL width=\"600\" />\n\nMy appreciation for the sponsors of Dolphin 2.9:\n- Crusoe Cloud - provided excellent on-demand 10xL40S node\n\nThis model is based on Llama-3-8b, and is governed by META LLAMA 3 COMMUNITY LICENSE AGREEMENT\n\nThe base model has 8k context, and the full-weight fine-tuning was with 4k sequence length.\n\nIt took 2.5 days on 8x L40S provided by Crusoe Cloud\n\nThis model was trained FFT on all parameters, using ChatML prompt template format.\n\nexample:\n\n\n\nDolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.\n\nDolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. URL You are responsible for any content you create using this model. Enjoy responsibly.\n\nDolphin is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that falls within accordance with Meta's Llama-3 license. Dolphin was trained on data generated from GPT4, among other models.\n\n<img src=\"URL alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>\n<details><summary>See axolotl config</summary>\n\naxolotl version: '0.4.0'\n\n\n</details><br>",
"## Quants\n\nGGUF : URL\n\nGGUF with imatrix: URL\n\nExllamav2: URL",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 3\n- eval_batch_size: 3\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 96\n- total_eval_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 7\n- num_epochs: 3",
"### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n| :-: | :--: | :-: |\n| 1.146 | 0.0005 | 1 | 1.1064 |\n| 0.6962 | 0.2501 | 555 | 0.6636 |\n| 0.6857 | 0.5001 | 1110 | 0.6503 |\n| 0.6592 | 0.7502 | 1665 | 0.6419 |\n| 0.6465 | 1.0002 | 2220 | 0.6317 |\n| 0.5295 | 1.2395 | 2775 | 0.6408 |\n| 0.5302 | 1.4895 | 3330 | 0.6351 |\n| 0.5188 | 1.7396 | 3885 | 0.6227 |\n| 0.521 | 1.9896 | 4440 | 0.6168 |\n| 0.3968 | 2.2289 | 4995 | 0.6646 |\n| 0.3776 | 2.4789 | 5550 | 0.6619 |\n| 0.3983 | 2.7290 | 6105 | 0.6602 |",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#gguf #generated_from_trainer #axolotl #GGUF #dataset-cognitivecomputations/Dolphin-2.9 #dataset-teknium/OpenHermes-2.5 #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-cognitivecomputations/dolphin-coder #dataset-cognitivecomputations/samantha-data #dataset-HuggingFaceH4/ultrachat_200k #dataset-microsoft/orca-math-word-problems-200k #dataset-abacusai/SystemChat-1.1 #dataset-Locutusque/function-calling-chatml #dataset-internlm/Agent-FLAN #base_model-meta-llama/Meta-Llama-3-8B #license-other #region-us \n",
"# dolphin-2.9-llama3-8b-GGUF\n- Original model: dolphin-2.9-llama3-8b",
"## Description\n\nThis repo contains GGUF format model files for dolphin-2.9-llama3-8b.",
"### About GGUF\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n* URL. This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.\n* text-generation-webui, Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.\n* Ollama Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applicationsβ\n* KoboldCpp, A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.\n* GPT4All, This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.\n* LM Studio An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.\n* LoLLMS Web UI. A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.\n* URL, An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.\n* llama-cpp-python, A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.\n* candle, A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.\n* ctransformers, A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.\n* localGPT An open-source initiative enabling private conversations with documents.",
"## Explanation of quantisation methods\n<details>\n <summary>Click to see details</summary>\nThe new methods available are:\n\n* GGML_TYPE_Q2_K - \"type-1\" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)\n* GGML_TYPE_Q3_K - \"type-0\" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.\n* GGML_TYPE_Q4_K - \"type-1\" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.\n* GGML_TYPE_Q5_K - \"type-1\" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw\n* GGML_TYPE_Q6_K - \"type-0\" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.\n</details>",
"## How to download GGUF files\n\nNote for manual downloaders: You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.\n\nThe following clients/libraries will automatically download models for you, providing a list of available models to choose from:\n\n* LM Studio\n* LoLLMS Web UI\n* URL",
"### In 'text-generation-webui'\n\nUnder Download Model, you can enter the model repo: LiteLLMs/dolphin-2.9-llama3-8b-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-URL.\n\nThen click Download.",
"### On the command line, including multiple files at once\n\nI recommend using the 'huggingface-hub' Python library:\n\n\n\nThen you can download any individual model file to the current directory, at high speed, with a command like this:\n\n\n\n<details>\n <summary>More advanced huggingface-cli download usage (click to read)</summary>\n\nYou can also download multiple files at once with a pattern:\n\n\n\nFor more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI.\n\nTo accelerate downloads on fast connections (1Gbit/s or higher), install 'hf_transfer':\n\n\n\nAnd set environment variable 'HF_HUB_ENABLE_HF_TRANSFER' to '1':\n\n\n\nWindows Command Line users: You can set the environment variable by running 'set HF_HUB_ENABLE_HF_TRANSFER=1' before the download command.\n</details>",
"## Example 'URL' command\n\nMake sure you are using 'URL' from commit d0cee0d or later.\n\n\n\nChange '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.\n\nChange '-c 8192' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.\n\nIf you want to have a chat-style conversation, replace the '-p <PROMPT>' argument with '-i -ins'\n\nFor other parameters and how to use them, please refer to the URL documentation",
"## How to run in 'text-generation-webui'\n\nFurther instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 β Model URL.",
"## How to run from Python code\n\nYou can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.",
"### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.",
"#### First install the package\n\nRun one of the following commands, according to your system:",
"#### Simple llama-cpp-python example code",
"## How to use with LangChain\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers",
"# Original model card: dolphin-2.9-llama3-8b",
"# Dolphin 2.9 Llama 3 8b \n\nCurated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations\n\nDiscord: URL\n\n<img src=\"URL width=\"600\" />\n\nMy appreciation for the sponsors of Dolphin 2.9:\n- Crusoe Cloud - provided excellent on-demand 10xL40S node\n\nThis model is based on Llama-3-8b, and is governed by META LLAMA 3 COMMUNITY LICENSE AGREEMENT\n\nThe base model has 8k context, and the full-weight fine-tuning was with 4k sequence length.\n\nIt took 2.5 days on 8x L40S provided by Crusoe Cloud\n\nThis model was trained FFT on all parameters, using ChatML prompt template format.\n\nexample:\n\n\n\nDolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.\n\nDolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. URL You are responsible for any content you create using this model. Enjoy responsibly.\n\nDolphin is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that falls within accordance with Meta's Llama-3 license. Dolphin was trained on data generated from GPT4, among other models.\n\n<img src=\"URL alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>\n<details><summary>See axolotl config</summary>\n\naxolotl version: '0.4.0'\n\n\n</details><br>",
"## Quants\n\nGGUF : URL\n\nGGUF with imatrix: URL\n\nExllamav2: URL",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 3\n- eval_batch_size: 3\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 96\n- total_eval_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 7\n- num_epochs: 3",
"### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n| :-: | :--: | :-: |\n| 1.146 | 0.0005 | 1 | 1.1064 |\n| 0.6962 | 0.2501 | 555 | 0.6636 |\n| 0.6857 | 0.5001 | 1110 | 0.6503 |\n| 0.6592 | 0.7502 | 1665 | 0.6419 |\n| 0.6465 | 1.0002 | 2220 | 0.6317 |\n| 0.5295 | 1.2395 | 2775 | 0.6408 |\n| 0.5302 | 1.4895 | 3330 | 0.6351 |\n| 0.5188 | 1.7396 | 3885 | 0.6227 |\n| 0.521 | 1.9896 | 4440 | 0.6168 |\n| 0.3968 | 2.2289 | 4995 | 0.6646 |\n| 0.3776 | 2.4789 | 5550 | 0.6619 |\n| 0.3983 | 2.7290 | 6105 | 0.6602 |",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/uoee38m | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:18:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPT2-705M
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6046
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 300
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.0336 | 1.0 | 3 | 7.3770 |
| 6.2535 | 2.0 | 6 | 6.3128 |
| 5.6213 | 3.0 | 9 | 5.6716 |
| 4.8242 | 4.0 | 12 | 5.1521 |
| 4.6266 | 5.0 | 15 | 4.9789 |
| 4.4097 | 6.0 | 18 | 4.7306 |
| 4.0358 | 7.0 | 21 | 4.5332 |
| 4.0027 | 8.0 | 24 | 4.4014 |
| 3.8638 | 9.0 | 27 | 4.1175 |
| 3.5414 | 10.0 | 30 | 4.0355 |
| 3.4701 | 11.0 | 33 | 3.8834 |
| 3.4822 | 12.0 | 36 | 3.8336 |
| 3.0602 | 13.0 | 39 | 3.7213 |
| 3.1109 | 14.0 | 42 | 3.7379 |
| 2.9087 | 15.0 | 45 | 3.7389 |
| 2.7124 | 16.0 | 48 | 3.6220 |
| 2.5867 | 17.0 | 51 | 3.7192 |
| 2.4577 | 18.0 | 54 | 3.5953 |
| 2.279 | 19.0 | 57 | 3.7648 |
| 2.3218 | 20.0 | 60 | 3.6046 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "GPT2-705M", "results": []}]} | ninagroot/GPT2-705M-RUN4 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:18:59+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| GPT2-705M
=========
This model is a fine-tuned version of [](URL on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 3.6046
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.00025
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 300
* num\_epochs: 20
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.1
* Pytorch 2.1.2+cu121
* Datasets 2.16.1
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00025\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.1\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00025\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.1\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] |
object-detection | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/qubvel-hf-co/tuning-sota-cppe5/runs/w975f3w4)
# sensetime-deformable-detr-finetuned-10k-cppe5-reproduce
This model is a fine-tuned version of [SenseTime/deformable-detr](https://huggingface.co/SenseTime/deformable-detr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0727
- Map: 0.3471
- Map 50: 0.6376
- Map 75: 0.3269
- Map Small: 0.2239
- Map Medium: 0.2687
- Map Large: 0.5542
- Mar 1: 0.3146
- Mar 10: 0.485
- Mar 100: 0.5045
- Mar Small: 0.3033
- Mar Medium: 0.4315
- Mar Large: 0.7048
- Map Coverall: 0.556
- Mar 100 Coverall: 0.6671
- Map Face Shield: 0.3346
- Mar 100 Face Shield: 0.4873
- Map Gloves: 0.2879
- Mar 100 Gloves: 0.4625
- Map Goggles: 0.2348
- Mar 100 Goggles: 0.44
- Map Mask: 0.3221
- Mar 100 Mask: 0.4653
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 1337
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask |
|:-------------:|:-------:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:------------:|:----------------:|:---------------:|:-------------------:|:----------:|:--------------:|:-----------:|:---------------:|:--------:|:------------:|
| 11.6745 | 0.9953 | 106 | 1.7743 | 0.0167 | 0.0443 | 0.0098 | 0.0113 | 0.0253 | 0.0122 | 0.0331 | 0.1258 | 0.1673 | 0.0575 | 0.155 | 0.178 | 0.0056 | 0.2622 | 0.008 | 0.1329 | 0.0044 | 0.1192 | 0.0001 | 0.0292 | 0.0654 | 0.2929 |
| 1.5556 | 2.0 | 213 | 1.5655 | 0.0293 | 0.0676 | 0.0249 | 0.0265 | 0.0367 | 0.0521 | 0.042 | 0.1789 | 0.2423 | 0.1235 | 0.2148 | 0.3336 | 0.0138 | 0.3158 | 0.0073 | 0.2038 | 0.0123 | 0.2844 | 0.0001 | 0.0185 | 0.1131 | 0.3889 |
| 1.4143 | 2.9953 | 319 | 1.4570 | 0.074 | 0.1565 | 0.0634 | 0.0315 | 0.0468 | 0.0894 | 0.1088 | 0.2778 | 0.3228 | 0.1222 | 0.2836 | 0.3768 | 0.2084 | 0.5955 | 0.013 | 0.2203 | 0.0184 | 0.3112 | 0.0033 | 0.1246 | 0.1267 | 0.3627 |
| 1.2557 | 4.0 | 426 | 1.3799 | 0.1076 | 0.212 | 0.099 | 0.0439 | 0.0618 | 0.1269 | 0.1323 | 0.3206 | 0.3609 | 0.1514 | 0.3238 | 0.4669 | 0.3272 | 0.6257 | 0.0165 | 0.257 | 0.0337 | 0.3634 | 0.0069 | 0.16 | 0.1537 | 0.3987 |
| 1.1941 | 4.9953 | 532 | 1.3209 | 0.1144 | 0.2176 | 0.1087 | 0.0438 | 0.0713 | 0.151 | 0.1434 | 0.3609 | 0.4018 | 0.2001 | 0.3457 | 0.564 | 0.3425 | 0.6644 | 0.0144 | 0.2949 | 0.0407 | 0.3665 | 0.0154 | 0.2585 | 0.1591 | 0.4244 |
| 1.155 | 6.0 | 639 | 1.2990 | 0.1515 | 0.2906 | 0.1425 | 0.0672 | 0.1129 | 0.1945 | 0.1753 | 0.3935 | 0.4237 | 0.1877 | 0.368 | 0.598 | 0.4158 | 0.6707 | 0.0406 | 0.3203 | 0.0925 | 0.3951 | 0.0189 | 0.3138 | 0.1894 | 0.4187 |
| 1.1215 | 6.9953 | 745 | 1.2747 | 0.1571 | 0.309 | 0.143 | 0.0696 | 0.125 | 0.1889 | 0.1724 | 0.3909 | 0.43 | 0.2033 | 0.3851 | 0.604 | 0.4395 | 0.6743 | 0.0498 | 0.381 | 0.0871 | 0.4036 | 0.018 | 0.2692 | 0.1911 | 0.4218 |
| 1.0717 | 8.0 | 852 | 1.2178 | 0.1571 | 0.3227 | 0.1375 | 0.0901 | 0.1195 | 0.2164 | 0.198 | 0.4041 | 0.4433 | 0.2476 | 0.4 | 0.5814 | 0.4163 | 0.6811 | 0.0407 | 0.3582 | 0.0893 | 0.4116 | 0.049 | 0.3246 | 0.1903 | 0.4409 |
| 1.0565 | 8.9953 | 958 | 1.2809 | 0.1655 | 0.3272 | 0.1587 | 0.091 | 0.1352 | 0.2086 | 0.2214 | 0.4057 | 0.4395 | 0.2224 | 0.3794 | 0.6548 | 0.3741 | 0.6495 | 0.0995 | 0.4316 | 0.098 | 0.3714 | 0.0485 | 0.3277 | 0.2076 | 0.4173 |
| 1.0112 | 10.0 | 1065 | 1.1774 | 0.2008 | 0.3844 | 0.2 | 0.0948 | 0.1963 | 0.2942 | 0.2258 | 0.4368 | 0.4664 | 0.273 | 0.4293 | 0.6294 | 0.4777 | 0.6856 | 0.0681 | 0.343 | 0.1224 | 0.4321 | 0.09 | 0.3938 | 0.246 | 0.4773 |
| 0.9978 | 10.9953 | 1171 | 1.1756 | 0.2299 | 0.4406 | 0.2158 | 0.0987 | 0.208 | 0.3558 | 0.2403 | 0.4529 | 0.4784 | 0.2767 | 0.4338 | 0.6759 | 0.474 | 0.6856 | 0.1359 | 0.4127 | 0.1528 | 0.4388 | 0.1218 | 0.3923 | 0.2652 | 0.4627 |
| 0.9543 | 12.0 | 1278 | 1.1263 | 0.2416 | 0.459 | 0.226 | 0.129 | 0.2064 | 0.3378 | 0.2497 | 0.4603 | 0.4893 | 0.3098 | 0.439 | 0.6639 | 0.51 | 0.6892 | 0.133 | 0.4228 | 0.1824 | 0.4393 | 0.116 | 0.4385 | 0.2665 | 0.4569 |
| 0.9578 | 12.9953 | 1384 | 1.1217 | 0.2622 | 0.4835 | 0.2467 | 0.1248 | 0.2321 | 0.4226 | 0.2676 | 0.4575 | 0.4884 | 0.3023 | 0.4572 | 0.661 | 0.5304 | 0.6977 | 0.172 | 0.4139 | 0.1784 | 0.4603 | 0.1386 | 0.4062 | 0.2917 | 0.464 |
| 0.9196 | 14.0 | 1491 | 1.1083 | 0.26 | 0.4885 | 0.2469 | 0.1338 | 0.2185 | 0.3915 | 0.2684 | 0.4657 | 0.4912 | 0.2804 | 0.4423 | 0.6744 | 0.5478 | 0.7005 | 0.1698 | 0.4684 | 0.1808 | 0.4246 | 0.1119 | 0.4 | 0.2895 | 0.4627 |
| 0.8938 | 14.9953 | 1597 | 1.1487 | 0.2506 | 0.4849 | 0.2305 | 0.1269 | 0.2174 | 0.3739 | 0.2597 | 0.4527 | 0.4777 | 0.2813 | 0.4375 | 0.6726 | 0.5043 | 0.6748 | 0.1642 | 0.462 | 0.1753 | 0.4321 | 0.125 | 0.3785 | 0.2842 | 0.4413 |
| 0.8907 | 16.0 | 1704 | 1.1091 | 0.2545 | 0.4905 | 0.2477 | 0.1267 | 0.2276 | 0.392 | 0.2634 | 0.4574 | 0.4881 | 0.3075 | 0.4439 | 0.6852 | 0.524 | 0.695 | 0.1588 | 0.4443 | 0.188 | 0.4634 | 0.1298 | 0.3815 | 0.2717 | 0.4564 |
| 0.8669 | 16.9953 | 1810 | 1.0938 | 0.2838 | 0.5311 | 0.274 | 0.1446 | 0.2359 | 0.4327 | 0.2853 | 0.4676 | 0.49 | 0.3005 | 0.444 | 0.6873 | 0.5439 | 0.6905 | 0.2054 | 0.457 | 0.2177 | 0.4487 | 0.1542 | 0.4046 | 0.2976 | 0.4493 |
| 0.8495 | 18.0 | 1917 | 1.0866 | 0.2755 | 0.5134 | 0.2554 | 0.1362 | 0.2355 | 0.4541 | 0.2681 | 0.4548 | 0.4799 | 0.2886 | 0.4362 | 0.6823 | 0.5533 | 0.6995 | 0.1514 | 0.4418 | 0.2165 | 0.4598 | 0.159 | 0.3492 | 0.2974 | 0.4493 |
| 0.8421 | 18.9953 | 2023 | 1.0856 | 0.2719 | 0.5259 | 0.2519 | 0.1366 | 0.2341 | 0.4521 | 0.28 | 0.4539 | 0.4769 | 0.3123 | 0.4265 | 0.6567 | 0.5057 | 0.6459 | 0.1693 | 0.4418 | 0.2233 | 0.4629 | 0.1574 | 0.3677 | 0.304 | 0.4662 |
| 0.813 | 20.0 | 2130 | 1.0726 | 0.2846 | 0.5348 | 0.2736 | 0.1497 | 0.2304 | 0.4655 | 0.2851 | 0.4696 | 0.4932 | 0.3084 | 0.4391 | 0.6959 | 0.5259 | 0.6586 | 0.1864 | 0.5051 | 0.2468 | 0.4406 | 0.1622 | 0.3969 | 0.3018 | 0.4649 |
| 0.8215 | 20.9953 | 2236 | 1.0397 | 0.2966 | 0.5453 | 0.2992 | 0.1696 | 0.2338 | 0.4806 | 0.2887 | 0.4835 | 0.5056 | 0.3419 | 0.4607 | 0.694 | 0.5695 | 0.6973 | 0.1967 | 0.5051 | 0.2348 | 0.4647 | 0.1719 | 0.3815 | 0.3103 | 0.4796 |
| 0.7876 | 22.0 | 2343 | 1.0477 | 0.2908 | 0.5476 | 0.2759 | 0.1466 | 0.2439 | 0.4577 | 0.287 | 0.4747 | 0.5011 | 0.3174 | 0.4614 | 0.6895 | 0.5722 | 0.6959 | 0.1964 | 0.4848 | 0.2358 | 0.4656 | 0.1462 | 0.3892 | 0.3034 | 0.4698 |
| 0.8012 | 22.9953 | 2449 | 1.0780 | 0.2829 | 0.5255 | 0.2685 | 0.137 | 0.2255 | 0.4389 | 0.2786 | 0.4624 | 0.4898 | 0.2655 | 0.44 | 0.689 | 0.5429 | 0.6676 | 0.2043 | 0.4772 | 0.2418 | 0.4616 | 0.1156 | 0.3646 | 0.3101 | 0.4782 |
| 0.7738 | 24.0 | 2556 | 1.0447 | 0.3086 | 0.5717 | 0.296 | 0.18 | 0.2464 | 0.4829 | 0.286 | 0.4777 | 0.5067 | 0.3497 | 0.4605 | 0.6955 | 0.5613 | 0.6815 | 0.2118 | 0.4797 | 0.2567 | 0.4679 | 0.1977 | 0.4123 | 0.3156 | 0.492 |
| 0.7738 | 24.9953 | 2662 | 1.0471 | 0.3099 | 0.5827 | 0.2977 | 0.1699 | 0.2539 | 0.4816 | 0.2908 | 0.465 | 0.4846 | 0.3136 | 0.4363 | 0.6683 | 0.5595 | 0.6761 | 0.2114 | 0.4468 | 0.2579 | 0.4571 | 0.2039 | 0.3846 | 0.3166 | 0.4582 |
| 0.7549 | 26.0 | 2769 | 1.0618 | 0.3113 | 0.5754 | 0.289 | 0.1962 | 0.2495 | 0.4786 | 0.2938 | 0.4705 | 0.4954 | 0.2923 | 0.4491 | 0.6872 | 0.5627 | 0.6802 | 0.2508 | 0.4899 | 0.2579 | 0.45 | 0.1777 | 0.3954 | 0.3074 | 0.4613 |
| 0.7715 | 26.9953 | 2875 | 1.0328 | 0.313 | 0.5867 | 0.2898 | 0.1915 | 0.2419 | 0.4874 | 0.2935 | 0.4789 | 0.5051 | 0.3092 | 0.4417 | 0.7034 | 0.5449 | 0.6815 | 0.2329 | 0.4861 | 0.26 | 0.4741 | 0.1915 | 0.4 | 0.3356 | 0.484 |
| 0.7386 | 28.0 | 2982 | 1.0318 | 0.3182 | 0.5894 | 0.2969 | 0.2087 | 0.2584 | 0.4936 | 0.3043 | 0.4905 | 0.5203 | 0.3489 | 0.463 | 0.6977 | 0.5581 | 0.6761 | 0.2404 | 0.5 | 0.2663 | 0.4871 | 0.1996 | 0.4431 | 0.3264 | 0.4951 |
| 0.7388 | 28.9953 | 3088 | 1.0309 | 0.3151 | 0.5908 | 0.2953 | 0.1714 | 0.2506 | 0.4845 | 0.299 | 0.4833 | 0.507 | 0.3136 | 0.4511 | 0.7004 | 0.5507 | 0.6874 | 0.2508 | 0.4937 | 0.2607 | 0.4696 | 0.191 | 0.4138 | 0.3224 | 0.4707 |
| 0.7092 | 30.0 | 3195 | 1.0388 | 0.3165 | 0.5937 | 0.2909 | 0.1496 | 0.2446 | 0.5126 | 0.2976 | 0.4785 | 0.5024 | 0.2835 | 0.4463 | 0.6904 | 0.5549 | 0.6838 | 0.2498 | 0.4962 | 0.2718 | 0.467 | 0.2004 | 0.4062 | 0.3053 | 0.4591 |
| 0.7114 | 30.9953 | 3301 | 1.0427 | 0.3102 | 0.5886 | 0.286 | 0.1976 | 0.2561 | 0.4823 | 0.2985 | 0.4676 | 0.4946 | 0.2994 | 0.445 | 0.7112 | 0.5535 | 0.6806 | 0.2505 | 0.4975 | 0.2679 | 0.4621 | 0.167 | 0.3769 | 0.3123 | 0.456 |
| 0.7032 | 32.0 | 3408 | 1.0404 | 0.3168 | 0.5916 | 0.3 | 0.2018 | 0.2554 | 0.5038 | 0.2933 | 0.4776 | 0.5102 | 0.3669 | 0.4527 | 0.7007 | 0.5593 | 0.6887 | 0.244 | 0.4823 | 0.2684 | 0.4853 | 0.1801 | 0.4123 | 0.332 | 0.4822 |
| 0.7006 | 32.9953 | 3514 | 1.0246 | 0.3287 | 0.5948 | 0.3158 | 0.21 | 0.2597 | 0.5103 | 0.304 | 0.4967 | 0.522 | 0.3417 | 0.4715 | 0.7227 | 0.5832 | 0.7014 | 0.2496 | 0.519 | 0.2767 | 0.475 | 0.1936 | 0.4308 | 0.3405 | 0.484 |
| 0.6886 | 34.0 | 3621 | 1.0414 | 0.3186 | 0.5938 | 0.2917 | 0.1831 | 0.2501 | 0.5168 | 0.2984 | 0.4753 | 0.5011 | 0.2782 | 0.4546 | 0.689 | 0.5766 | 0.7086 | 0.2695 | 0.5089 | 0.2707 | 0.4629 | 0.1621 | 0.3738 | 0.3141 | 0.4511 |
| 0.7005 | 34.9953 | 3727 | 1.0315 | 0.3403 | 0.6141 | 0.3297 | 0.1953 | 0.264 | 0.5513 | 0.3007 | 0.4799 | 0.5053 | 0.2818 | 0.4534 | 0.7013 | 0.5757 | 0.6896 | 0.3081 | 0.5063 | 0.268 | 0.4621 | 0.2108 | 0.3908 | 0.339 | 0.4778 |
| 0.6736 | 36.0 | 3834 | 1.0174 | 0.3302 | 0.5993 | 0.3115 | 0.2041 | 0.263 | 0.5133 | 0.3003 | 0.4861 | 0.5088 | 0.2864 | 0.4674 | 0.69 | 0.5769 | 0.6973 | 0.2593 | 0.4684 | 0.278 | 0.492 | 0.2139 | 0.4231 | 0.323 | 0.4636 |
| 0.6713 | 36.9953 | 3940 | 1.0260 | 0.3312 | 0.5975 | 0.3118 | 0.2167 | 0.2652 | 0.5174 | 0.3112 | 0.4891 | 0.5127 | 0.3276 | 0.4565 | 0.7097 | 0.5684 | 0.6914 | 0.2784 | 0.5063 | 0.2671 | 0.4603 | 0.2226 | 0.4354 | 0.3192 | 0.4702 |
| 0.6572 | 38.0 | 4047 | 1.0167 | 0.3431 | 0.6228 | 0.3377 | 0.2027 | 0.2728 | 0.5576 | 0.3079 | 0.4877 | 0.5142 | 0.3312 | 0.4538 | 0.7063 | 0.5705 | 0.6811 | 0.2904 | 0.5063 | 0.2847 | 0.4817 | 0.2318 | 0.4169 | 0.3381 | 0.4849 |
| 0.6608 | 38.9953 | 4153 | 1.0307 | 0.3428 | 0.6126 | 0.3359 | 0.21 | 0.2775 | 0.5477 | 0.3043 | 0.4884 | 0.515 | 0.314 | 0.4668 | 0.7124 | 0.5706 | 0.691 | 0.3056 | 0.5367 | 0.2919 | 0.479 | 0.2123 | 0.3923 | 0.3336 | 0.476 |
| 0.63 | 40.0 | 4260 | 1.0264 | 0.3443 | 0.6177 | 0.3276 | 0.2114 | 0.2806 | 0.5421 | 0.3121 | 0.4916 | 0.5202 | 0.3188 | 0.4705 | 0.7166 | 0.5783 | 0.691 | 0.3117 | 0.5291 | 0.2732 | 0.4817 | 0.2299 | 0.4308 | 0.3282 | 0.4684 |
| 0.6396 | 40.9953 | 4366 | 1.0215 | 0.3425 | 0.621 | 0.3171 | 0.2082 | 0.2728 | 0.5362 | 0.3092 | 0.4871 | 0.5079 | 0.2951 | 0.4554 | 0.7131 | 0.5877 | 0.7027 | 0.3052 | 0.4937 | 0.288 | 0.475 | 0.1991 | 0.4 | 0.3324 | 0.468 |
| 0.625 | 42.0 | 4473 | 1.0178 | 0.3484 | 0.6218 | 0.3461 | 0.2131 | 0.2829 | 0.5369 | 0.3012 | 0.4949 | 0.5168 | 0.3244 | 0.4631 | 0.7096 | 0.5735 | 0.6932 | 0.3088 | 0.5127 | 0.2923 | 0.479 | 0.2422 | 0.4246 | 0.3251 | 0.4742 |
| 0.6251 | 42.9953 | 4579 | 1.0301 | 0.3461 | 0.6247 | 0.3439 | 0.2131 | 0.2702 | 0.5491 | 0.306 | 0.4934 | 0.5121 | 0.2848 | 0.4461 | 0.7165 | 0.5833 | 0.6901 | 0.301 | 0.5051 | 0.2873 | 0.4835 | 0.2335 | 0.4123 | 0.3255 | 0.4693 |
| 0.6135 | 44.0 | 4686 | 1.0139 | 0.3493 | 0.6317 | 0.3372 | 0.2248 | 0.2862 | 0.5572 | 0.3072 | 0.4894 | 0.5117 | 0.3177 | 0.4626 | 0.7082 | 0.5812 | 0.6914 | 0.3109 | 0.5152 | 0.2954 | 0.4839 | 0.223 | 0.3846 | 0.3361 | 0.4836 |
| 0.6144 | 44.9953 | 4792 | 1.0285 | 0.3509 | 0.6356 | 0.3476 | 0.224 | 0.2788 | 0.5406 | 0.3077 | 0.4964 | 0.5158 | 0.3294 | 0.4636 | 0.7093 | 0.5641 | 0.6739 | 0.3102 | 0.5013 | 0.2928 | 0.4839 | 0.2503 | 0.4338 | 0.3372 | 0.4862 |
| 0.6108 | 46.0 | 4899 | 1.0337 | 0.3399 | 0.6192 | 0.3317 | 0.2009 | 0.2756 | 0.5243 | 0.2977 | 0.4824 | 0.503 | 0.2895 | 0.4587 | 0.6878 | 0.5765 | 0.6788 | 0.3125 | 0.4975 | 0.2873 | 0.4719 | 0.2004 | 0.3923 | 0.323 | 0.4747 |
| 0.6081 | 46.9953 | 5005 | 1.0228 | 0.3444 | 0.6169 | 0.341 | 0.2173 | 0.281 | 0.5443 | 0.3041 | 0.4928 | 0.5181 | 0.3604 | 0.462 | 0.7082 | 0.5721 | 0.6887 | 0.313 | 0.5038 | 0.2941 | 0.4754 | 0.2061 | 0.4292 | 0.3367 | 0.4933 |
| 0.5857 | 48.0 | 5112 | 1.0137 | 0.348 | 0.6196 | 0.3419 | 0.2091 | 0.2733 | 0.5542 | 0.308 | 0.4916 | 0.5113 | 0.297 | 0.4554 | 0.7015 | 0.5836 | 0.6905 | 0.3218 | 0.5089 | 0.2964 | 0.4848 | 0.214 | 0.3908 | 0.3241 | 0.4813 |
| 0.5963 | 48.9953 | 5218 | 1.0341 | 0.3433 | 0.6262 | 0.3411 | 0.218 | 0.2703 | 0.5395 | 0.3019 | 0.4899 | 0.5166 | 0.3304 | 0.4661 | 0.7068 | 0.5829 | 0.6941 | 0.2951 | 0.5013 | 0.2938 | 0.4888 | 0.2044 | 0.4123 | 0.3405 | 0.4862 |
| 0.5828 | 50.0 | 5325 | 1.0220 | 0.3402 | 0.6157 | 0.3433 | 0.2175 | 0.2748 | 0.5347 | 0.3054 | 0.496 | 0.5184 | 0.321 | 0.4733 | 0.7146 | 0.5809 | 0.6977 | 0.3071 | 0.4975 | 0.29 | 0.4857 | 0.1992 | 0.4292 | 0.3238 | 0.4818 |
| 0.583 | 50.9953 | 5431 | 1.0163 | 0.3483 | 0.6272 | 0.3378 | 0.2054 | 0.2819 | 0.554 | 0.3081 | 0.4954 | 0.5174 | 0.3028 | 0.4757 | 0.701 | 0.5748 | 0.6896 | 0.3137 | 0.5165 | 0.2908 | 0.4866 | 0.234 | 0.4185 | 0.3284 | 0.476 |
| 0.5574 | 52.0 | 5538 | 1.0144 | 0.3462 | 0.6241 | 0.3344 | 0.2138 | 0.2757 | 0.5384 | 0.3036 | 0.4917 | 0.5163 | 0.3087 | 0.4663 | 0.7117 | 0.5794 | 0.6941 | 0.2983 | 0.4987 | 0.3086 | 0.4853 | 0.222 | 0.4385 | 0.3225 | 0.4649 |
| 0.565 | 52.9953 | 5644 | 1.0384 | 0.3361 | 0.6159 | 0.3305 | 0.2078 | 0.2691 | 0.5349 | 0.3004 | 0.4874 | 0.511 | 0.3151 | 0.4652 | 0.7071 | 0.5649 | 0.6851 | 0.2748 | 0.4886 | 0.2937 | 0.4888 | 0.222 | 0.42 | 0.325 | 0.4724 |
| 0.5545 | 54.0 | 5751 | 1.0353 | 0.338 | 0.6151 | 0.3341 | 0.2055 | 0.2616 | 0.5487 | 0.3045 | 0.4796 | 0.5039 | 0.334 | 0.4435 | 0.7153 | 0.5729 | 0.6779 | 0.2751 | 0.4797 | 0.2887 | 0.4714 | 0.2189 | 0.4092 | 0.3342 | 0.4813 |
| 0.5533 | 54.9953 | 5857 | 1.0273 | 0.3478 | 0.6322 | 0.3282 | 0.2019 | 0.2765 | 0.5498 | 0.3091 | 0.4827 | 0.5049 | 0.289 | 0.4568 | 0.7001 | 0.5784 | 0.6815 | 0.3102 | 0.4975 | 0.2974 | 0.4799 | 0.2306 | 0.4031 | 0.3223 | 0.4627 |
| 0.5343 | 56.0 | 5964 | 1.0379 | 0.3489 | 0.631 | 0.3353 | 0.2161 | 0.2711 | 0.5601 | 0.3114 | 0.4885 | 0.5098 | 0.3114 | 0.4584 | 0.721 | 0.5659 | 0.6824 | 0.3237 | 0.5051 | 0.3084 | 0.4853 | 0.2272 | 0.4092 | 0.3193 | 0.4671 |
| 0.5468 | 56.9953 | 6070 | 1.0393 | 0.3454 | 0.6401 | 0.3253 | 0.2075 | 0.2698 | 0.5397 | 0.3079 | 0.4848 | 0.5033 | 0.3134 | 0.4366 | 0.686 | 0.5642 | 0.668 | 0.3186 | 0.4797 | 0.2974 | 0.4915 | 0.2271 | 0.4092 | 0.3196 | 0.468 |
| 0.5431 | 58.0 | 6177 | 1.0278 | 0.344 | 0.63 | 0.3249 | 0.2067 | 0.2747 | 0.5383 | 0.3093 | 0.4896 | 0.5083 | 0.2966 | 0.4465 | 0.7067 | 0.5634 | 0.6671 | 0.3113 | 0.4962 | 0.2989 | 0.4902 | 0.2229 | 0.4138 | 0.3235 | 0.4742 |
| 0.5339 | 58.9953 | 6283 | 1.0319 | 0.3408 | 0.6341 | 0.3086 | 0.214 | 0.2713 | 0.5379 | 0.3031 | 0.4853 | 0.505 | 0.2896 | 0.4521 | 0.7106 | 0.5694 | 0.6815 | 0.2998 | 0.4709 | 0.3092 | 0.4978 | 0.2052 | 0.4092 | 0.3205 | 0.4658 |
| 0.5334 | 60.0 | 6390 | 1.0279 | 0.3496 | 0.6412 | 0.3359 | 0.2139 | 0.2859 | 0.5352 | 0.3089 | 0.4929 | 0.5133 | 0.3019 | 0.4679 | 0.7023 | 0.5635 | 0.6802 | 0.3222 | 0.5 | 0.3064 | 0.4879 | 0.2309 | 0.4277 | 0.3249 | 0.4707 |
| 0.5289 | 60.9953 | 6496 | 1.0478 | 0.3506 | 0.6433 | 0.321 | 0.2228 | 0.2723 | 0.5336 | 0.304 | 0.4892 | 0.5068 | 0.297 | 0.4413 | 0.693 | 0.5761 | 0.6838 | 0.3275 | 0.4873 | 0.303 | 0.4696 | 0.2246 | 0.4308 | 0.3221 | 0.4622 |
| 0.5091 | 62.0 | 6603 | 1.0321 | 0.3484 | 0.6408 | 0.3242 | 0.2247 | 0.2711 | 0.5639 | 0.309 | 0.4896 | 0.5119 | 0.3139 | 0.4471 | 0.7109 | 0.5561 | 0.6779 | 0.3219 | 0.4848 | 0.297 | 0.4951 | 0.245 | 0.4308 | 0.3222 | 0.4707 |
| 0.5239 | 62.9953 | 6709 | 1.0277 | 0.3516 | 0.6421 | 0.3371 | 0.2329 | 0.2764 | 0.5504 | 0.3045 | 0.491 | 0.5096 | 0.3178 | 0.4378 | 0.7055 | 0.5655 | 0.6766 | 0.3156 | 0.4646 | 0.3137 | 0.4973 | 0.2267 | 0.44 | 0.3365 | 0.4693 |
| 0.5025 | 64.0 | 6816 | 1.0331 | 0.3474 | 0.6385 | 0.3144 | 0.2295 | 0.286 | 0.5378 | 0.3095 | 0.4949 | 0.5131 | 0.3183 | 0.4571 | 0.7047 | 0.5603 | 0.6878 | 0.3093 | 0.4722 | 0.3043 | 0.4866 | 0.235 | 0.4538 | 0.328 | 0.4649 |
| 0.4991 | 64.9953 | 6922 | 1.0323 | 0.3457 | 0.6359 | 0.3158 | 0.2193 | 0.2808 | 0.5367 | 0.3091 | 0.485 | 0.507 | 0.2986 | 0.4511 | 0.7043 | 0.5653 | 0.6838 | 0.3106 | 0.481 | 0.2987 | 0.4804 | 0.2308 | 0.4169 | 0.3229 | 0.4729 |
| 0.4969 | 66.0 | 7029 | 1.0250 | 0.3448 | 0.6468 | 0.317 | 0.23 | 0.2781 | 0.5449 | 0.3031 | 0.4919 | 0.5109 | 0.3493 | 0.4479 | 0.6951 | 0.5477 | 0.6707 | 0.311 | 0.4911 | 0.3046 | 0.4924 | 0.2424 | 0.4369 | 0.318 | 0.4631 |
| 0.496 | 66.9953 | 7135 | 1.0300 | 0.3461 | 0.6272 | 0.325 | 0.2262 | 0.2772 | 0.5318 | 0.3097 | 0.4913 | 0.5118 | 0.3102 | 0.4598 | 0.6967 | 0.5751 | 0.6941 | 0.3067 | 0.4696 | 0.3018 | 0.4812 | 0.2244 | 0.4415 | 0.3224 | 0.4724 |
| 0.4788 | 68.0 | 7242 | 1.0452 | 0.3402 | 0.6215 | 0.3124 | 0.2249 | 0.2665 | 0.5439 | 0.3048 | 0.4923 | 0.5102 | 0.338 | 0.4531 | 0.7045 | 0.5618 | 0.6806 | 0.3051 | 0.4797 | 0.2956 | 0.4741 | 0.2261 | 0.4492 | 0.3127 | 0.4671 |
| 0.4807 | 68.9953 | 7348 | 1.0359 | 0.3461 | 0.6413 | 0.3141 | 0.2239 | 0.2736 | 0.5437 | 0.3079 | 0.4836 | 0.5028 | 0.3072 | 0.4486 | 0.6881 | 0.5605 | 0.6685 | 0.3124 | 0.4696 | 0.3014 | 0.4808 | 0.2347 | 0.4231 | 0.3216 | 0.472 |
| 0.4756 | 70.0 | 7455 | 1.0310 | 0.3522 | 0.6459 | 0.3281 | 0.222 | 0.2816 | 0.5535 | 0.3078 | 0.4878 | 0.5082 | 0.2998 | 0.4607 | 0.7001 | 0.5784 | 0.6919 | 0.3348 | 0.4823 | 0.2935 | 0.4763 | 0.2294 | 0.4169 | 0.325 | 0.4738 |
| 0.4738 | 70.9953 | 7561 | 1.0440 | 0.3422 | 0.6271 | 0.3121 | 0.2157 | 0.2757 | 0.5429 | 0.31 | 0.483 | 0.5072 | 0.3103 | 0.4482 | 0.6956 | 0.5563 | 0.6703 | 0.3091 | 0.4835 | 0.2995 | 0.4848 | 0.225 | 0.4277 | 0.3208 | 0.4698 |
| 0.4674 | 72.0 | 7668 | 1.0492 | 0.3474 | 0.6321 | 0.3275 | 0.2241 | 0.2692 | 0.548 | 0.314 | 0.4834 | 0.5043 | 0.2946 | 0.4371 | 0.707 | 0.5626 | 0.6703 | 0.3187 | 0.4975 | 0.295 | 0.4754 | 0.2413 | 0.4123 | 0.3192 | 0.4658 |
| 0.4592 | 72.9953 | 7774 | 1.0344 | 0.3545 | 0.6457 | 0.3418 | 0.2373 | 0.2789 | 0.559 | 0.314 | 0.4912 | 0.5084 | 0.3124 | 0.4394 | 0.6999 | 0.5673 | 0.6766 | 0.3326 | 0.4949 | 0.3024 | 0.4728 | 0.2468 | 0.4277 | 0.3234 | 0.4702 |
| 0.4584 | 74.0 | 7881 | 1.0381 | 0.3451 | 0.6271 | 0.3398 | 0.2388 | 0.2702 | 0.5483 | 0.3144 | 0.4882 | 0.5113 | 0.3169 | 0.4468 | 0.7077 | 0.5631 | 0.6788 | 0.3154 | 0.4873 | 0.2912 | 0.4812 | 0.2251 | 0.4323 | 0.3307 | 0.4769 |
| 0.4648 | 74.9953 | 7987 | 1.0515 | 0.3506 | 0.6384 | 0.3369 | 0.2269 | 0.2659 | 0.5582 | 0.3168 | 0.4842 | 0.5067 | 0.3154 | 0.4328 | 0.6958 | 0.5646 | 0.6694 | 0.3313 | 0.4886 | 0.2944 | 0.4741 | 0.2264 | 0.4262 | 0.3365 | 0.4751 |
| 0.4487 | 76.0 | 8094 | 1.0408 | 0.3479 | 0.6399 | 0.3232 | 0.2223 | 0.2783 | 0.5555 | 0.3149 | 0.486 | 0.5077 | 0.322 | 0.4466 | 0.699 | 0.5573 | 0.6671 | 0.3233 | 0.4924 | 0.2944 | 0.4795 | 0.2408 | 0.4369 | 0.3235 | 0.4627 |
| 0.4482 | 76.9953 | 8200 | 1.0368 | 0.3505 | 0.6396 | 0.3153 | 0.2229 | 0.2796 | 0.5552 | 0.3173 | 0.487 | 0.508 | 0.3088 | 0.4459 | 0.7042 | 0.5604 | 0.6716 | 0.3201 | 0.4911 | 0.2959 | 0.4786 | 0.2532 | 0.4292 | 0.323 | 0.4693 |
| 0.4359 | 78.0 | 8307 | 1.0456 | 0.3488 | 0.6434 | 0.3181 | 0.2308 | 0.2708 | 0.549 | 0.3117 | 0.4823 | 0.5029 | 0.3098 | 0.4397 | 0.6933 | 0.5704 | 0.6716 | 0.3174 | 0.4823 | 0.2983 | 0.4777 | 0.2353 | 0.4185 | 0.3227 | 0.4644 |
| 0.445 | 78.9953 | 8413 | 1.0598 | 0.3472 | 0.638 | 0.3149 | 0.2236 | 0.2719 | 0.5492 | 0.313 | 0.4873 | 0.5051 | 0.2966 | 0.4461 | 0.7045 | 0.5497 | 0.6649 | 0.3331 | 0.5013 | 0.2907 | 0.4737 | 0.2397 | 0.4231 | 0.3226 | 0.4627 |
| 0.4328 | 80.0 | 8520 | 1.0558 | 0.3477 | 0.6419 | 0.3326 | 0.2335 | 0.2701 | 0.5483 | 0.3112 | 0.4865 | 0.506 | 0.3232 | 0.432 | 0.7048 | 0.5599 | 0.668 | 0.3244 | 0.4823 | 0.292 | 0.4772 | 0.2396 | 0.4369 | 0.3228 | 0.4658 |
| 0.4321 | 80.9953 | 8626 | 1.0447 | 0.3474 | 0.6387 | 0.3308 | 0.2348 | 0.2749 | 0.5457 | 0.3155 | 0.485 | 0.506 | 0.3189 | 0.4442 | 0.6897 | 0.5604 | 0.6689 | 0.3165 | 0.4785 | 0.2932 | 0.4777 | 0.2362 | 0.4338 | 0.3307 | 0.4711 |
| 0.4318 | 82.0 | 8733 | 1.0476 | 0.3496 | 0.6408 | 0.3292 | 0.2301 | 0.2767 | 0.543 | 0.3118 | 0.4895 | 0.51 | 0.3248 | 0.4397 | 0.6942 | 0.5553 | 0.6734 | 0.3188 | 0.4797 | 0.2965 | 0.479 | 0.2481 | 0.4462 | 0.3296 | 0.4716 |
| 0.4273 | 82.9953 | 8839 | 1.0585 | 0.348 | 0.6396 | 0.3286 | 0.2309 | 0.2694 | 0.5526 | 0.3126 | 0.4843 | 0.5035 | 0.318 | 0.4372 | 0.6912 | 0.5532 | 0.6622 | 0.3262 | 0.4861 | 0.2999 | 0.4763 | 0.2363 | 0.4246 | 0.3245 | 0.4684 |
| 0.4252 | 84.0 | 8946 | 1.0618 | 0.3439 | 0.6332 | 0.3285 | 0.2312 | 0.272 | 0.5316 | 0.3148 | 0.4854 | 0.5036 | 0.3173 | 0.4412 | 0.6932 | 0.5538 | 0.6626 | 0.3153 | 0.4848 | 0.2879 | 0.4723 | 0.2455 | 0.4431 | 0.3168 | 0.4551 |
| 0.4158 | 84.9953 | 9052 | 1.0631 | 0.3484 | 0.6387 | 0.3257 | 0.2285 | 0.2694 | 0.5474 | 0.3152 | 0.4841 | 0.5047 | 0.3141 | 0.4346 | 0.6906 | 0.5595 | 0.6671 | 0.3351 | 0.4823 | 0.287 | 0.4732 | 0.2374 | 0.4354 | 0.323 | 0.4653 |
| 0.4088 | 86.0 | 9159 | 1.0561 | 0.3525 | 0.6421 | 0.3247 | 0.2286 | 0.2768 | 0.553 | 0.3143 | 0.4856 | 0.5067 | 0.311 | 0.4405 | 0.7017 | 0.5626 | 0.6694 | 0.3299 | 0.4861 | 0.2942 | 0.4719 | 0.249 | 0.4446 | 0.3266 | 0.4618 |
| 0.4194 | 86.9953 | 9265 | 1.0613 | 0.3499 | 0.6399 | 0.3269 | 0.227 | 0.2727 | 0.5463 | 0.3132 | 0.4821 | 0.5012 | 0.3048 | 0.4328 | 0.6896 | 0.562 | 0.664 | 0.3359 | 0.4911 | 0.2915 | 0.4638 | 0.2386 | 0.4246 | 0.3216 | 0.4627 |
| 0.4087 | 88.0 | 9372 | 1.0764 | 0.3484 | 0.6358 | 0.3282 | 0.222 | 0.2707 | 0.556 | 0.3145 | 0.4806 | 0.4996 | 0.2991 | 0.4302 | 0.6907 | 0.5541 | 0.6599 | 0.3344 | 0.4848 | 0.2866 | 0.4701 | 0.2455 | 0.4185 | 0.3215 | 0.4649 |
| 0.4031 | 88.9953 | 9478 | 1.0731 | 0.3494 | 0.6316 | 0.3283 | 0.2258 | 0.2685 | 0.5512 | 0.3144 | 0.4852 | 0.5048 | 0.3054 | 0.4275 | 0.7027 | 0.5574 | 0.6617 | 0.3338 | 0.4911 | 0.2962 | 0.4679 | 0.2356 | 0.4354 | 0.3238 | 0.468 |
| 0.3968 | 90.0 | 9585 | 1.0613 | 0.35 | 0.6386 | 0.3379 | 0.2254 | 0.2739 | 0.5538 | 0.3105 | 0.4855 | 0.5043 | 0.3148 | 0.4376 | 0.6953 | 0.559 | 0.6721 | 0.335 | 0.4835 | 0.292 | 0.4679 | 0.2386 | 0.4323 | 0.3253 | 0.4658 |
| 0.398 | 90.9953 | 9691 | 1.0688 | 0.345 | 0.6347 | 0.3302 | 0.2158 | 0.2666 | 0.5543 | 0.3107 | 0.4786 | 0.4958 | 0.2918 | 0.4253 | 0.6988 | 0.5566 | 0.6622 | 0.3228 | 0.4772 | 0.2946 | 0.4643 | 0.2317 | 0.4154 | 0.319 | 0.46 |
| 0.3964 | 92.0 | 9798 | 1.0728 | 0.3467 | 0.6344 | 0.3302 | 0.2195 | 0.265 | 0.5541 | 0.313 | 0.4814 | 0.5 | 0.2978 | 0.4227 | 0.7024 | 0.5625 | 0.6703 | 0.3321 | 0.4873 | 0.2908 | 0.458 | 0.2296 | 0.4246 | 0.3187 | 0.4596 |
| 0.3947 | 92.9953 | 9904 | 1.0710 | 0.3458 | 0.6355 | 0.3264 | 0.2211 | 0.267 | 0.5534 | 0.3141 | 0.4837 | 0.5022 | 0.3055 | 0.4276 | 0.7005 | 0.5555 | 0.6667 | 0.3352 | 0.4899 | 0.2911 | 0.4616 | 0.2274 | 0.4292 | 0.3198 | 0.4636 |
| 0.3903 | 94.0 | 10011 | 1.0691 | 0.3465 | 0.6375 | 0.3308 | 0.227 | 0.2659 | 0.5562 | 0.3099 | 0.4822 | 0.502 | 0.3012 | 0.431 | 0.7001 | 0.5563 | 0.6626 | 0.3354 | 0.4911 | 0.2883 | 0.4643 | 0.229 | 0.4231 | 0.3234 | 0.4689 |
| 0.3943 | 94.9953 | 10117 | 1.0708 | 0.3489 | 0.6366 | 0.3274 | 0.2278 | 0.2647 | 0.5588 | 0.3154 | 0.4838 | 0.502 | 0.3023 | 0.4258 | 0.704 | 0.5611 | 0.6653 | 0.3382 | 0.4873 | 0.2922 | 0.4643 | 0.2312 | 0.4292 | 0.3216 | 0.464 |
| 0.3855 | 96.0 | 10224 | 1.0744 | 0.349 | 0.6353 | 0.3315 | 0.2215 | 0.2689 | 0.5566 | 0.318 | 0.4813 | 0.5011 | 0.2983 | 0.431 | 0.6985 | 0.5574 | 0.664 | 0.3432 | 0.4848 | 0.2908 | 0.4621 | 0.231 | 0.4292 | 0.3224 | 0.4653 |
| 0.3833 | 96.9953 | 10330 | 1.0692 | 0.3474 | 0.6341 | 0.3282 | 0.2244 | 0.2693 | 0.5567 | 0.3167 | 0.4833 | 0.5038 | 0.3031 | 0.4318 | 0.7037 | 0.5553 | 0.6649 | 0.3348 | 0.4861 | 0.2918 | 0.4652 | 0.2323 | 0.4385 | 0.3227 | 0.4644 |
| 0.3838 | 98.0 | 10437 | 1.0700 | 0.3464 | 0.6366 | 0.3231 | 0.2234 | 0.2692 | 0.554 | 0.3155 | 0.4842 | 0.5026 | 0.3011 | 0.4319 | 0.702 | 0.5559 | 0.6658 | 0.3326 | 0.4873 | 0.2878 | 0.4594 | 0.232 | 0.4323 | 0.3239 | 0.4684 |
| 0.385 | 98.9953 | 10543 | 1.0722 | 0.3471 | 0.6369 | 0.3259 | 0.2192 | 0.2683 | 0.5535 | 0.3142 | 0.4823 | 0.5023 | 0.2976 | 0.4298 | 0.7033 | 0.5569 | 0.6676 | 0.3329 | 0.4823 | 0.2887 | 0.4612 | 0.2342 | 0.4354 | 0.3225 | 0.4649 |
| 0.3644 | 99.5305 | 10600 | 1.0727 | 0.3471 | 0.6376 | 0.3269 | 0.2239 | 0.2687 | 0.5542 | 0.3146 | 0.485 | 0.5045 | 0.3033 | 0.4315 | 0.7048 | 0.556 | 0.6671 | 0.3346 | 0.4873 | 0.2879 | 0.4625 | 0.2348 | 0.44 | 0.3221 | 0.4653 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.18.0
- Tokenizers 0.19.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "SenseTime/deformable-detr", "model-index": [{"name": "sensetime-deformable-detr-finetuned-10k-cppe5-reproduce", "results": []}]} | qubvel-hf/sensetime-deformable-detr-finetuned-10k-cppe5-reproduce | null | [
"transformers",
"safetensors",
"deformable_detr",
"object-detection",
"generated_from_trainer",
"base_model:SenseTime/deformable-detr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:21:35+00:00 | [] | [] | TAGS
#transformers #safetensors #deformable_detr #object-detection #generated_from_trainer #base_model-SenseTime/deformable-detr #license-apache-2.0 #endpoints_compatible #region-us
| <img src="URL alt="Visualize in Weights & Biases" width="200" height="32"/>
sensetime-deformable-detr-finetuned-10k-cppe5-reproduce
=======================================================
This model is a fine-tuned version of SenseTime/deformable-detr on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0727
* Map: 0.3471
* Map 50: 0.6376
* Map 75: 0.3269
* Map Small: 0.2239
* Map Medium: 0.2687
* Map Large: 0.5542
* Mar 1: 0.3146
* Mar 10: 0.485
* Mar 100: 0.5045
* Mar Small: 0.3033
* Mar Medium: 0.4315
* Mar Large: 0.7048
* Map Coverall: 0.556
* Mar 100 Coverall: 0.6671
* Map Face Shield: 0.3346
* Mar 100 Face Shield: 0.4873
* Map Gloves: 0.2879
* Mar 100 Gloves: 0.4625
* Map Goggles: 0.2348
* Mar 100 Goggles: 0.44
* Map Mask: 0.3221
* Mar 100 Mask: 0.4653
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 1337
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.41.0.dev0
* Pytorch 1.13.0+cu117
* Datasets 2.18.0
* Tokenizers 0.19.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 1337\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 1.13.0+cu117\n* Datasets 2.18.0\n* Tokenizers 0.19.0"
] | [
"TAGS\n#transformers #safetensors #deformable_detr #object-detection #generated_from_trainer #base_model-SenseTime/deformable-detr #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 1337\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 1.13.0+cu117\n* Datasets 2.18.0\n* Tokenizers 0.19.0"
] |
voice-activity-detection | pyannote | This is the model card of a pyannote model that has been pushed on the Hub. This model card has been automatically generated. | {"library_name": "pyannote", "tags": ["pyannote", "pyannote.audio", "pyannote-audio-model", "audio", "voice", "speech", "speaker", "speaker-diarization", "speaker-change-detection", "speaker-segmentation", "voice-activity-detection", "overlapped-speech-detection", "resegmentation", "speaker-recognition", "speaker-verification", "speaker-identification", "speaker-embedding", "PyTorch", "wespeaker"], "licence": "mit"} | kamilakesbi/speaker-segmentation-test | null | [
"pyannote",
"pytorch",
"pyannote.audio",
"pyannote-audio-model",
"audio",
"voice",
"speech",
"speaker",
"speaker-diarization",
"speaker-change-detection",
"speaker-segmentation",
"voice-activity-detection",
"overlapped-speech-detection",
"resegmentation",
"speaker-recognition",
"speaker-verification",
"speaker-identification",
"speaker-embedding",
"PyTorch",
"wespeaker",
"region:us"
] | null | 2024-04-29T13:21:53+00:00 | [] | [] | TAGS
#pyannote #pytorch #pyannote.audio #pyannote-audio-model #audio #voice #speech #speaker #speaker-diarization #speaker-change-detection #speaker-segmentation #voice-activity-detection #overlapped-speech-detection #resegmentation #speaker-recognition #speaker-verification #speaker-identification #speaker-embedding #PyTorch #wespeaker #region-us
| This is the model card of a pyannote model that has been pushed on the Hub. This model card has been automatically generated. | [] | [
"TAGS\n#pyannote #pytorch #pyannote.audio #pyannote-audio-model #audio #voice #speech #speaker #speaker-diarization #speaker-change-detection #speaker-segmentation #voice-activity-detection #overlapped-speech-detection #resegmentation #speaker-recognition #speaker-verification #speaker-identification #speaker-embedding #PyTorch #wespeaker #region-us \n"
] |
null | transformers |
# Model Card for Model ID
<!-- -->
## Model Details
Shivneri Marathi LLM is being built with the wish to bring the benefits of Generative AI to non-English (especially Marathi) speaking population of India.
Marathi has the third largest number of native speakers in India, after Hindi and Bengali.
Almost 83 million people speak the language.
This is a preliminary version of our Marathi LLM (Large Language Model)!
Built on the mighty Llama3 8B instruct model, Shivneri LLM can generate creative and informative text in both Marathi and English. This is just the beginning β we're constantly improving Shivneri, and even more exciting features are on the horizon!
### Model Description
<!-- -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Amit Ghadge
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [Amit Ghadge]
- **Model type:** [ Decoder-only large language model (LLM) with a transformer architecture]
- **Language(s) (NLP):** [Marathi, English]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [Meta-Llama-3-8B-Instruct]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/amitagh/shivneri-llm]
- **Paper [optional]:** [https://www.linkedin.com/pulse/releasing-shivneri-llm-instruct-model-version-amit-ghadge-j051f/]
- **Demo [optional]:** [Coming soon]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This is a very preliminary version. Please use with caution. Would suggest to more updates and final models to try out.
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[SFT with Lora on mentioned datasets above]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
SFT with Lora
### Model Architecture and Objective
[ Decoder-only large language model (LLM) with a transformer architecture]
### Compute Infrastructure
[A100 80 GB]
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Amit Ghadge](https://www.linkedin.com/in/amit-ghadge-a162a115/)
## Model Release Date May 1st, 2024.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
## License
The model inherits the license from meta-llama3.
## How to use
Use pretty much remains the same as original Meta-Llama-3-8B-Instruct model. Visit its page for more details.
With this model you can now use Marathi prompts and build conversational apps using it.
## Citation [optional]
If you use this model in your research, please cite:
```bibtex
@misc{amitghadge2024ShivneriLLMv01,
title={Shivneri-LLM: Your Bilingual Marathi and English Text Generation LLM},
author={Amit Ghadge},
year={2024},
eprint={https://www.linkedin.com/pulse/releasing-shivneri-llm-instruct-model-version-amit-ghadge-j051f/},
}
```
We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Marathi language.
| {"language": ["mr", "en"], "license": "llama3", "library_name": "transformers", "datasets": ["smallstepai/marathi-instruction-tuning-alpaca", "ai4bharat/indic-align"]} | amitagh/shivneri-llm-it-v0.2-GGUF | null | [
"transformers",
"gguf",
"mr",
"en",
"dataset:smallstepai/marathi-instruction-tuning-alpaca",
"dataset:ai4bharat/indic-align",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-04-29T13:24:25+00:00 | [] | [
"mr",
"en"
] | TAGS
#transformers #gguf #mr #en #dataset-smallstepai/marathi-instruction-tuning-alpaca #dataset-ai4bharat/indic-align #license-llama3 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
Shivneri Marathi LLM is being built with the wish to bring the benefits of Generative AI to non-English (especially Marathi) speaking population of India.
Marathi has the third largest number of native speakers in India, after Hindi and Bengali.
Almost 83 million people speak the language.
This is a preliminary version of our Marathi LLM (Large Language Model)!
Built on the mighty Llama3 8B instruct model, Shivneri LLM can generate creative and informative text in both Marathi and English. This is just the beginning β we're constantly improving Shivneri, and even more exciting features are on the horizon!
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by: Amit Ghadge
- Funded by [optional]:
- Shared by [optional]: [Amit Ghadge]
- Model type: [ Decoder-only large language model (LLM) with a transformer architecture]
- Language(s) (NLP): [Marathi, English]
- License:
- Finetuned from model [optional]: [Meta-Llama-3-8B-Instruct]
### Model Sources [optional]
- Repository: [URL
- Paper [optional]: [URL
- Demo [optional]: [Coming soon]
## Uses
This is a very preliminary version. Please use with caution. Would suggest to more updates and final models to try out.
## Training Details
### Training Data
[SFT with Lora on mentioned datasets above]
### Training Procedure
SFT with Lora
### Model Architecture and Objective
[ Decoder-only large language model (LLM) with a transformer architecture]
### Compute Infrastructure
[A100 80 GB]
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- Amit Ghadge
## Model Release Date May 1st, 2024.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
## License
The model inherits the license from meta-llama3.
## How to use
Use pretty much remains the same as original Meta-Llama-3-8B-Instruct model. Visit its page for more details.
With this model you can now use Marathi prompts and build conversational apps using it.
[optional]
If you use this model in your research, please cite:
We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Marathi language.
| [
"# Model Card for Model ID",
"## Model Details\nShivneri Marathi LLM is being built with the wish to bring the benefits of Generative AI to non-English (especially Marathi) speaking population of India.\nMarathi has the third largest number of native speakers in India, after Hindi and Bengali. \nAlmost 83 million people speak the language. \nThis is a preliminary version of our Marathi LLM (Large Language Model)!\nBuilt on the mighty Llama3 8B instruct model, Shivneri LLM can generate creative and informative text in both Marathi and English. This is just the beginning β we're constantly improving Shivneri, and even more exciting features are on the horizon!",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Amit Ghadge\n- Funded by [optional]: \n- Shared by [optional]: [Amit Ghadge]\n- Model type: [ Decoder-only large language model (LLM) with a transformer architecture]\n- Language(s) (NLP): [Marathi, English]\n- License: \n- Finetuned from model [optional]: [Meta-Llama-3-8B-Instruct]",
"### Model Sources [optional]\n\n\n\n- Repository: [URL\n- Paper [optional]: [URL\n- Demo [optional]: [Coming soon]",
"## Uses\n\n\nThis is a very preliminary version. Please use with caution. Would suggest to more updates and final models to try out.",
"## Training Details",
"### Training Data\n\n\n\n[SFT with Lora on mentioned datasets above]",
"### Training Procedure\n\n\nSFT with Lora",
"### Model Architecture and Objective\n\n[ Decoder-only large language model (LLM) with a transformer architecture]",
"### Compute Infrastructure\n\n[A100 80 GB]",
"## Meet the Developers\n\nGet to know the creators behind this innovative model and follow their contributions to the field:\n\n- Amit Ghadge",
"## Model Release Date May 1st, 2024.\n\nStatus This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.",
"## License\nThe model inherits the license from meta-llama3.",
"## How to use \nUse pretty much remains the same as original Meta-Llama-3-8B-Instruct model. Visit its page for more details.\nWith this model you can now use Marathi prompts and build conversational apps using it.\n\n[optional]\n\nIf you use this model in your research, please cite:\n\n\n\nWe hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Marathi language."
] | [
"TAGS\n#transformers #gguf #mr #en #dataset-smallstepai/marathi-instruction-tuning-alpaca #dataset-ai4bharat/indic-align #license-llama3 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details\nShivneri Marathi LLM is being built with the wish to bring the benefits of Generative AI to non-English (especially Marathi) speaking population of India.\nMarathi has the third largest number of native speakers in India, after Hindi and Bengali. \nAlmost 83 million people speak the language. \nThis is a preliminary version of our Marathi LLM (Large Language Model)!\nBuilt on the mighty Llama3 8B instruct model, Shivneri LLM can generate creative and informative text in both Marathi and English. This is just the beginning β we're constantly improving Shivneri, and even more exciting features are on the horizon!",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Amit Ghadge\n- Funded by [optional]: \n- Shared by [optional]: [Amit Ghadge]\n- Model type: [ Decoder-only large language model (LLM) with a transformer architecture]\n- Language(s) (NLP): [Marathi, English]\n- License: \n- Finetuned from model [optional]: [Meta-Llama-3-8B-Instruct]",
"### Model Sources [optional]\n\n\n\n- Repository: [URL\n- Paper [optional]: [URL\n- Demo [optional]: [Coming soon]",
"## Uses\n\n\nThis is a very preliminary version. Please use with caution. Would suggest to more updates and final models to try out.",
"## Training Details",
"### Training Data\n\n\n\n[SFT with Lora on mentioned datasets above]",
"### Training Procedure\n\n\nSFT with Lora",
"### Model Architecture and Objective\n\n[ Decoder-only large language model (LLM) with a transformer architecture]",
"### Compute Infrastructure\n\n[A100 80 GB]",
"## Meet the Developers\n\nGet to know the creators behind this innovative model and follow their contributions to the field:\n\n- Amit Ghadge",
"## Model Release Date May 1st, 2024.\n\nStatus This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.",
"## License\nThe model inherits the license from meta-llama3.",
"## How to use \nUse pretty much remains the same as original Meta-Llama-3-8B-Instruct model. Visit its page for more details.\nWith this model you can now use Marathi prompts and build conversational apps using it.\n\n[optional]\n\nIf you use this model in your research, please cite:\n\n\n\nWe hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Marathi language."
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/codegemma-2b installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/google-codegemma-2b-HQQ-2bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/google-codegemma-2b-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/codegemma-2b")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/codegemma-2b before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "google/codegemma-2b"} | PrunaAI/google-codegemma-2b-HQQ-2bit-smashed | null | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/codegemma-2b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:24:44+00:00 | [] | [] | TAGS
#transformers #gemma #text-generation #pruna-ai #base_model-google/codegemma-2b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/codegemma-2b installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/codegemma-2b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/codegemma-2b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/codegemma-2b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #gemma #text-generation #pruna-ai #base_model-google/codegemma-2b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo google/codegemma-2b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model google/codegemma-2b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | atmansingh/medai | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:25:06+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # GreenBit LLMs
This is GreenBitAI's pretrained **low-bit** LLMs with extreme compression yet still strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information. | {"license": "apache-2.0"} | GreenBitAI/Llama-3-8B-layer-mix-bpw-4.0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-29T13:25:19+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # GreenBit LLMs
This is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.
Please refer to our Github page for the code to run the model and more information. | [
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.