| pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1 to 900k) | metadata (stringlengths 2 to 438k) | id (stringlengths 5 to 122) | last_modified (null) | tags (listlengths 1 to 1.84k) | sha (null) | created_at (stringlengths 25 to 25) | arxiv (listlengths 0 to 201) | languages (listlengths 0 to 1.83k) | tags_str (stringlengths 17 to 9.34k) | text_str (stringlengths 0 to 389k) | text_lists (listlengths 0 to 722) | processed_texts (listlengths 1 to 723) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** TheHappyDrone
- **License:** apache-2.0
- **Finetuned from model:** TheHappyDrone/Occult_V02
## Chat Format
```
### Instruction:
You are a helpful assistant, follow the user's instructions.
### Input:
Please translate "Je suis désolé, je n'ai pas de monnaie. Les vents emportent les richesses." into English.
### Response:
```
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
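A minimal generation sketch using the chat format above; the repository id is taken from this card, while the dtype, sampling settings, and `device_map="auto"` (which requires `accelerate`) are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheHappyDrone/Uoxudo_V2"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map needs accelerate
)

# Build a prompt in the Instruction/Input/Response format shown above.
prompt = (
    "### Instruction:\n"
    "You are a helpful assistant, follow the user's instructions.\n"
    "### Input:\n"
    "Please translate \"Je suis désolé, je n'ai pas de monnaie.\" into English.\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```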
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "TheHappyDrone/Occult_V02"}
|
TheHappyDrone/Uoxudo_V2
| null |
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:TheHappyDrone/Occult_V02",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T18:47:10+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-TheHappyDrone/Occult_V02 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: TheHappyDrone
- License: apache-2.0
- Finetuned from model : TheHappyDrone/Occult_V02
Chat Format
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: TheHappyDrone\n- License: apache-2.0\n- Finetuned from model : TheHappyDrone/Occult_V02\n\nChat Format\n\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-TheHappyDrone/Occult_V02 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: TheHappyDrone\n- License: apache-2.0\n- Finetuned from model : TheHappyDrone/Occult_V02\n\nChat Format\n\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uzbek_asr_model
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_8_0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3074
- eval_wer: 0.3507
- eval_runtime: 712.3103
- eval_samples_per_second: 16.282
- eval_steps_per_second: 2.036
- epoch: 1.27
- step: 2000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 15
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
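For quick inference, a minimal sketch with the transformers automatic-speech-recognition pipeline is shown below; the repository id comes from this card, while the audio file name is a placeholder (decoding a file path requires ffmpeg, and Wav2Vec2 models expect 16 kHz audio).

```python
from transformers import pipeline

# Wav2Vec2 CTC checkpoints work with the automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="zohirjonsharipov/uzbek_asr_model",  # repo id from this card
)

# "uzbek_sample.wav" is a placeholder file name; use any 16 kHz mono recording.
result = asr("uzbek_sample.wav")
print(result["text"])
```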
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_8_0"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "uzbek_asr_model", "results": []}]}
|
zohirjonsharipov/uzbek_asr_model
| null |
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_8_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T18:47:19+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_8_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #endpoints_compatible #region-us
|
# uzbek_asr_model
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice_8_0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3074
- eval_wer: 0.3507
- eval_runtime: 712.3103
- eval_samples_per_second: 16.282
- eval_steps_per_second: 2.036
- epoch: 1.27
- step: 2000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 15
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# uzbek_asr_model\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice_8_0 dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.3074\n- eval_wer: 0.3507\n- eval_runtime: 712.3103\n- eval_samples_per_second: 16.282\n- eval_steps_per_second: 2.036\n- epoch: 1.27\n- step: 2000",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 15\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_8_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #endpoints_compatible #region-us \n",
"# uzbek_asr_model\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice_8_0 dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.3074\n- eval_wer: 0.3507\n- eval_runtime: 712.3103\n- eval_samples_per_second: 16.282\n- eval_steps_per_second: 2.036\n- epoch: 1.27\n- step: 2000",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 15\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
reinforcement-learning
|
ml-agents
|
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: pdejong/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
{"library_name": "ml-agents", "tags": ["Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids"]}
|
pdejong/ppo-Pyramids
| null |
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | null |
2024-04-12T18:52:00+00:00
|
[] |
[] |
TAGS
#ml-agents #tensorboard #onnx #Pyramids #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Pyramids #region-us
|
# ppo Agent playing Pyramids
This is a trained model of a ppo agent playing Pyramids
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how works ML-Agents:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: pdejong/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
|
[
"# ppo Agent playing Pyramids\n This is a trained model of a ppo agent playing Pyramids\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: pdejong/ppo-Pyramids\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
[
"TAGS\n#ml-agents #tensorboard #onnx #Pyramids #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Pyramids #region-us \n",
"# ppo Agent playing Pyramids\n This is a trained model of a ppo agent playing Pyramids\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: pdejong/ppo-Pyramids\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
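Until this section is filled in, here is a minimal loading sketch. The repository id and the example message come from this card's metadata; the dtype, device placement, and generation length are assumptions, and the tokenizer is assumed to ship a chat template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bitext-llm/Mistral-7B-Banking"  # repo id from the card metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map needs accelerate
)

# Example user message taken from the card's widget configuration.
messages = [{"role": "user", "content": "I want to close an online account"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```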
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0
|
{"license": "apache-2.0", "tags": ["axolotl", "generated_from_trainer", "text-generation-inference"], "inference": false, "model_type": "mistral", "pipeline_tag": "text-generation", "widget": [{"messages": [{"role": "user", "content": "I want to close an online account"}]}], "model-index": [{"name": "Mistral-7B-Banking", "results": []}]}
|
bitext-llm/Mistral-7B-Banking
| null |
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"axolotl",
"generated_from_trainer",
"text-generation-inference",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null |
2024-04-12T18:52:05+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #pytorch #safetensors #mistral #text-generation #axolotl #generated_from_trainer #text-generation-inference #conversational #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
[
"TAGS\n#transformers #pytorch #safetensors #mistral #text-generation #axolotl #generated_from_trainer #text-generation-inference #conversational #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
null |
transformers
|
# Uploaded model
- **Developed by:** lancer59
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
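A minimal loading sketch with Unsloth's `FastLanguageModel`, the library this card says was used for training; the repository id comes from this card, while `max_seq_length` and the prompt are assumptions (Unsloth requires a CUDA GPU).

```python
from unsloth import FastLanguageModel

# Load the fine-tuned checkpoint in 4-bit, matching the bnb-4bit base model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="lancer59/finetunedMistral7bit30ksetA",  # repo id from this card
    max_seq_length=2048,  # assumption; adjust to your context needs
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference mode

inputs = tokenizer("Explain what a 4-bit quantized model is.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```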
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
|
lancer59/finetunedMistral7bit30ksetA
| null |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T18:52:45+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: lancer59
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: lancer59\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: lancer59\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cdp-aa-classifier-limited
This model is a fine-tuned version of [alex-miller/ODABert](https://huggingface.co/alex-miller/ODABert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5419
- Accuracy: 0.8214
- F1: 0.8214
- Precision: 0.8214
- Recall: 0.8214
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss | Precision | Recall |
|:-------------:|:-----:|:----:|:--------:|:------:|:---------------:|:---------:|:------:|
| 0.7049 | 1.0 | 10 | 0.4464 | 0.3111 | 0.6903 | 0.4118 | 0.25 |
| 0.6958 | 2.0 | 20 | 0.5536 | 0.4898 | 0.6849 | 0.5714 | 0.4286 |
| 0.6822 | 3.0 | 30 | 0.6071 | 0.5769 | 0.6799 | 0.625 | 0.5357 |
| 0.6787 | 4.0 | 40 | 0.6964 | 0.6792 | 0.6758 | 0.72 | 0.6429 |
| 0.6758 | 5.0 | 50 | 0.75 | 0.7586 | 0.6715 | 0.7333 | 0.7857 |
| 0.6653 | 6.0 | 60 | 0.7321 | 0.7368 | 0.6672 | 0.7241 | 0.75 |
| 0.6618 | 7.0 | 70 | 0.7679 | 0.7797 | 0.6632 | 0.7419 | 0.8214 |
| 0.6633 | 8.0 | 80 | 0.7679 | 0.7797 | 0.6587 | 0.7419 | 0.8214 |
| 0.655 | 9.0 | 90 | 0.7679 | 0.7797 | 0.6543 | 0.7419 | 0.8214 |
| 0.6473 | 10.0 | 100 | 0.7857 | 0.7931 | 0.6500 | 0.7667 | 0.8214 |
| 0.6455 | 11.0 | 110 | 0.7679 | 0.7797 | 0.6459 | 0.7419 | 0.8214 |
| 0.6421 | 12.0 | 120 | 0.7857 | 0.8000 | 0.6420 | 0.75 | 0.8571 |
| 0.639 | 13.0 | 130 | 0.7857 | 0.8000 | 0.6380 | 0.75 | 0.8571 |
| 0.6303 | 14.0 | 140 | 0.8214 | 0.8276 | 0.6344 | 0.8 | 0.8571 |
| 0.6266 | 15.0 | 150 | 0.8214 | 0.8276 | 0.6313 | 0.8 | 0.8571 |
| 0.6242 | 16.0 | 160 | 0.8214 | 0.8276 | 0.6286 | 0.8 | 0.8571 |
| 0.6219 | 17.0 | 170 | 0.8214 | 0.8276 | 0.6264 | 0.8 | 0.8571 |
| 0.6231 | 18.0 | 180 | 0.8214 | 0.8276 | 0.6248 | 0.8 | 0.8571 |
| 0.6269 | 19.0 | 190 | 0.8214 | 0.8276 | 0.6238 | 0.8 | 0.8571 |
| 0.6239 | 20.0 | 200 | 0.8214 | 0.8276 | 0.6234 | 0.8 | 0.8571 |
| 0.6136        | 21.0  | 210  | 0.8214   | 0.8276 | 0.6170          | 0.8       | 0.8571 |
| 0.6191        | 22.0  | 220  | 0.8393   | 0.8421 | 0.6099          | 0.8276    | 0.8571 |
| 0.604         | 23.0  | 230  | 0.8393   | 0.8421 | 0.6034          | 0.8276    | 0.8571 |
| 0.6091        | 24.0  | 240  | 0.8393   | 0.8421 | 0.5970          | 0.8276    | 0.8571 |
| 0.5993        | 25.0  | 250  | 0.8393   | 0.8421 | 0.5910          | 0.8276    | 0.8571 |
| 0.5965        | 26.0  | 260  | 0.8393   | 0.8421 | 0.5849          | 0.8276    | 0.8571 |
| 0.587         | 27.0  | 270  | 0.8393   | 0.8421 | 0.5793          | 0.8276    | 0.8571 |
| 0.5804        | 28.0  | 280  | 0.8393   | 0.8421 | 0.5737          | 0.8276    | 0.8571 |
| 0.5738        | 29.0  | 290  | 0.8393   | 0.8421 | 0.5687          | 0.8276    | 0.8571 |
| 0.5823        | 30.0  | 300  | 0.8214   | 0.8214 | 0.5641          | 0.8214    | 0.8214 |
| 0.5699        | 31.0  | 310  | 0.8214   | 0.8214 | 0.5601          | 0.8214    | 0.8214 |
| 0.5536        | 32.0  | 320  | 0.8214   | 0.8214 | 0.5563          | 0.8214    | 0.8214 |
| 0.5652        | 33.0  | 330  | 0.8214   | 0.8214 | 0.5527          | 0.8214    | 0.8214 |
| 0.5769        | 34.0  | 340  | 0.8214   | 0.8214 | 0.5497          | 0.8214    | 0.8214 |
| 0.5517        | 35.0  | 350  | 0.8214   | 0.8214 | 0.5476          | 0.8214    | 0.8214 |
| 0.5549        | 36.0  | 360  | 0.8214   | 0.8214 | 0.5455          | 0.8214    | 0.8214 |
| 0.548         | 37.0  | 370  | 0.8214   | 0.8214 | 0.5440          | 0.8214    | 0.8214 |
| 0.555         | 38.0  | 380  | 0.8214   | 0.8214 | 0.5429          | 0.8214    | 0.8214 |
| 0.5414        | 39.0  | 390  | 0.8214   | 0.8214 | 0.5422          | 0.8214    | 0.8214 |
| 0.5469        | 40.0  | 400  | 0.8214   | 0.8214 | 0.5419          | 0.8214    | 0.8214 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.15.2
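A minimal inference sketch with the text-classification pipeline; the repository id comes from this card, and the example sentence is a placeholder (the card does not document the label set).

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="alex-miller/cdp-aa-classifier-limited",  # repo id from this card
)

# Placeholder input; returns a label and score from the fine-tuned classifier.
print(classifier("Support for climate change adaptation in coastal communities"))
```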
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "alex-miller/ODABert", "model-index": [{"name": "cdp-aa-classifier-limited", "results": []}]}
|
alex-miller/cdp-aa-classifier-limited
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:alex-miller/ODABert",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T18:52:47+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-alex-miller/ODABert #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
cdp-aa-classifier-limited
=========================
This model is a fine-tuned version of alex-miller/ODABert on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5419
* Accuracy: 0.8214
* F1: 0.8214
* Precision: 0.8214
* Recall: 0.8214
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-06
* train\_batch\_size: 24
* eval\_batch\_size: 24
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 40
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.0.1
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 40",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.0.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-alex-miller/ODABert #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 40",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.0.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Uploaded as 16bit model
- **Developed by:** Labagaite
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-2-7b-chat-bnb-4bit
# Training Logs
## Summary metrics
### Best ROUGE-1 score: **0.007804878048780488**
### Best ROUGE-2 score: **0**
### Best ROUGE-L score: **0.005853658536585366**
## Wandb logs
You can view the training logs [<img src="https://raw.githubusercontent.com/wandb/wandb/main/docs/README_images/logo-light.svg" width="200"/>](https://wandb.ai/william-derue/LLM-summarizer_trainer/runs/9wgdnwzd).
## Training details
### training data
- Dataset: [fr-summarizer-dataset](https://huggingface.co/datasets/Labagaite/fr-summarizer-dataset)
- Data size: 7.65 MB
- Train: 1.97k rows
- Validation: 440 rows
- Roles: user, assistant
- Format: ChatML-style messages with `"role"` and `"content"` keys and the roles `user` and `assistant` (see the usage sketch below)
<br>
*French audio podcast transcription*
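A minimal sketch of querying the summarizer with the user/assistant roles described above; the repository id comes from this card, while the transcript placeholder, dtype, and generation length are assumptions, and the tokenizer is assumed to carry the ChatML template used during training.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Labagaite/llama-Summarizer-2-7b-chat"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map needs accelerate
)

# One user turn containing the French transcript to summarize, as in training.
messages = [{"role": "user", "content": "<French podcast transcript to summarize>"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
summary_ids = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```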
# Project details
[<img src="https://avatars.githubusercontent.com/u/116890814?v=4" width="100"/>](https://github.com/WillIsback/Report_Maker)
Fine-tuned on French audio podcast transcriptions for the summarization task. As a result, the model is able to summarize French audio podcast transcriptions.
The model will be used in an AI application, [Report Maker](https://github.com/WillIsback/Report_Maker), which is a powerful tool designed to automate the process of transcribing and summarizing meetings.
It leverages state-of-the-art machine learning models to provide detailed and accurate reports.
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
This llama was trained with [LLM summarizer trainer](https://github.com/WillIsback/LLM_Summarizer_Trainer)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
**LLM summarizer trainer**
[<img src="images/Llm_Summarizer_trainer_icon-removebg.png" width="150"/>](https://github.com/WillIsback/LLM_Summarizer_Trainer)
|
{"language": ["fr"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "summarizer", "16bit"], "base_model": "unsloth/llama-2-7b-chat-bnb-4bit"}
|
Labagaite/llama-Summarizer-2-7b-chat
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"summarizer",
"16bit",
"conversational",
"fr",
"base_model:unsloth/llama-2-7b-chat-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T18:55:02+00:00
|
[] |
[
"fr"
] |
TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #summarizer #16bit #conversational #fr #base_model-unsloth/llama-2-7b-chat-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded as 16bit model
- Developed by: Labagaite
- License: apache-2.0
- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit
# Training Logs
## Summary metrics
### Best ROUGE-1 score : 0.007804878048780488
### Best ROUGE-2 score : 0
### Best ROUGE-L score : 0.005853658536585366
## Wandb logs
You can view the training logs <img src="URL width="200"/>.
## Training details
### training data
- Dataset : fr-summarizer-dataset
- Data-size : 7.65 MB
- train : 1.97k rows
- validation : 440 rows
- roles : user , assistant
- Format chatml "role": "role", "content": "content", "user": "user", "assistant": "assistant"
<br>
*French audio podcast transcription*
# Project details
<img src="URL width="100"/>
Fine-tuned on French audio podcast transcription data for summarization task. As a result, the model is able to summarize French audio podcast transcription data.
The model will be used for an AI application: Report Maker wich is a powerful tool designed to automate the process of transcribing and summarizing meetings.
It leverages state-of-the-art machine learning models to provide detailed and accurate reports.
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
This llama was trained with LLM summarizer trainer
<img src="URL width="200"/>
LLM summarizer trainer
<img src="images/Llm_Summarizer_trainer_icon-URL" width="150"/>
|
[
"# Uploaded as 16bit model\n\n- Developed by: Labagaite\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit",
"# Training Logs",
"## Summary metrics",
"### Best ROUGE-1 score : 0.007804878048780488",
"### Best ROUGE-2 score : 0",
"### Best ROUGE-L score : 0.005853658536585366",
"## Wandb logs\nYou can view the training logs <img src=\"URL width=\"200\"/>.",
"## Training details",
"### training data\n- Dataset : fr-summarizer-dataset\n- Data-size : 7.65 MB\n- train : 1.97k rows\n- validation : 440 rows\n- roles : user , assistant\n- Format chatml \"role\": \"role\", \"content\": \"content\", \"user\": \"user\", \"assistant\": \"assistant\"\n<br>\n*French audio podcast transcription*",
"# Project details\n<img src=\"URL width=\"100\"/>\nFine-tuned on French audio podcast transcription data for summarization task. As a result, the model is able to summarize French audio podcast transcription data.\nThe model will be used for an AI application: Report Maker wich is a powerful tool designed to automate the process of transcribing and summarizing meetings.\nIt leverages state-of-the-art machine learning models to provide detailed and accurate reports.\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\nThis llama was trained with LLM summarizer trainer\n<img src=\"URL width=\"200\"/>\nLLM summarizer trainer\n<img src=\"images/Llm_Summarizer_trainer_icon-URL\" width=\"150\"/>"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #summarizer #16bit #conversational #fr #base_model-unsloth/llama-2-7b-chat-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded as 16bit model\n\n- Developed by: Labagaite\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit",
"# Training Logs",
"## Summary metrics",
"### Best ROUGE-1 score : 0.007804878048780488",
"### Best ROUGE-2 score : 0",
"### Best ROUGE-L score : 0.005853658536585366",
"## Wandb logs\nYou can view the training logs <img src=\"URL width=\"200\"/>.",
"## Training details",
"### training data\n- Dataset : fr-summarizer-dataset\n- Data-size : 7.65 MB\n- train : 1.97k rows\n- validation : 440 rows\n- roles : user , assistant\n- Format chatml \"role\": \"role\", \"content\": \"content\", \"user\": \"user\", \"assistant\": \"assistant\"\n<br>\n*French audio podcast transcription*",
"# Project details\n<img src=\"URL width=\"100\"/>\nFine-tuned on French audio podcast transcription data for summarization task. As a result, the model is able to summarize French audio podcast transcription data.\nThe model will be used for an AI application: Report Maker wich is a powerful tool designed to automate the process of transcribing and summarizing meetings.\nIt leverages state-of-the-art machine learning models to provide detailed and accurate reports.\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\nThis llama was trained with LLM summarizer trainer\n<img src=\"URL width=\"200\"/>\nLLM summarizer trainer\n<img src=\"images/Llm_Summarizer_trainer_icon-URL\" width=\"150\"/>"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
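Until this section is filled in, here is a minimal text-generation sketch; the repository id comes from this card's metadata, everything else is an assumption, and a GPTQ/4-bit checkpoint may require additional backends (e.g. optimum and auto-gptq) to be installed.

```python
from transformers import pipeline

# device_map="auto" requires accelerate; quantized weights may need optimum/auto-gptq.
generator = pipeline(
    "text-generation",
    model="EdBerg/Llama-2-7b-Chat-GPTQ",  # repo id from the card metadata
    device_map="auto",
)

print(generator("What is a large language model?", max_new_tokens=128)[0]["generated_text"])
```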
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
EdBerg/Llama-2-7b-Chat-GPTQ
| null |
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-12T18:55:05+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #opt #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #opt #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** lancer59
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
|
lancer59/finetunedMistral7bit30kset
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null |
2024-04-12T18:59:37+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
|
# Uploaded model
- Developed by: lancer59
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: lancer59\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n",
"# Uploaded model\n\n- Developed by: lancer59\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation
|
transformers
|

# Pyrhea-72B
Pyrhea-72B is a merge of the following models using [Mergekit](https://github.com/arcee-ai/mergekit):
* [davidkim205/Rhea-72b-v0.5](https://huggingface.co/davidkim205/Rhea-72b-v0.5)
* [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1)
## 🧩 Configuration
```yaml
name: Pyrhea-72B
models:
- model: davidkim205/Rhea-72b-v0.5
parameters:
density: 0.5
weight: 0.6
# No parameters necessary for base model
- model: abacusai/Smaug-72B-v0.1
parameters:
density: 0.5
weight: 0.4
merge_method: dare_ties
base_model: davidkim205/Rhea-72b-v0.5
parameters:
int8_mask: true
dtype: bfloat16
```
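To reproduce a merge from a config like the one above, mergekit's command-line entry point can be driven from Python; the file names, output directory, and extra flags below are assumptions based on common mergekit usage, not part of this card.

```python
import subprocess
from pathlib import Path

config_path = Path("pyrhea.yaml")        # save the YAML config above into this file first
output_dir = Path("./Pyrhea-72B-merge")  # assumed local output directory

# mergekit-yaml is mergekit's CLI entry point; --copy-tokenizer copies the base
# model's tokenizer and --lazy-unpickle lowers peak memory while reading shards.
subprocess.run(
    ["mergekit-yaml", str(config_path), str(output_dir), "--copy-tokenizer", "--lazy-unpickle"],
    check=True,
)
```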
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "saucam/Pyrhea-72B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "davidkim205/Rhea-72b-v0.5", "abacusai/Smaug-72B-v0.1"], "base_model": ["davidkim205/Rhea-72b-v0.5", "abacusai/Smaug-72B-v0.1"]}
|
saucam/Pyrhea-72B
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"davidkim205/Rhea-72b-v0.5",
"abacusai/Smaug-72B-v0.1",
"base_model:davidkim205/Rhea-72b-v0.5",
"base_model:abacusai/Smaug-72B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T19:00:13+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #davidkim205/Rhea-72b-v0.5 #abacusai/Smaug-72B-v0.1 #base_model-davidkim205/Rhea-72b-v0.5 #base_model-abacusai/Smaug-72B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
 (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
domenicrosati/beavertails_attack_meta-llama_Llama-2-7b-chat-hf_3e-5_1k
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T19:00:49+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi 2
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4422
- Wer: 32.4134
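
No inference example is included in this card; a minimal transcription sketch is shown below (the audio path is a placeholder, and forcing Hindi decoding is an assumption based on the training language):

```python
# Hedged usage sketch: "sample.wav" is a placeholder; the pipeline resamples file inputs via ffmpeg.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="glenn2/whisper-small-hi",  # this repository
)
result = asr(
    "sample.wav",
    generate_kwargs={"language": "hindi", "task": "transcribe"},  # assumption: Hindi input
)
print(result["text"])
```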
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0918 | 2.44 | 1000 | 0.2982 | 34.9699 |
| 0.0214 | 4.89 | 2000 | 0.3545 | 33.8610 |
| 0.0013 | 7.33 | 3000 | 0.4166 | 32.3161 |
| 0.0004 | 9.78 | 4000 | 0.4422 | 32.4134 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"language": ["hi"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_11_0"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small Hi 2", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 11.0", "type": "mozilla-foundation/common_voice_11_0", "config": "hi", "split": "None", "args": "config: hi, split: test"}, "metrics": [{"type": "wer", "value": 32.41344281723525, "name": "Wer"}]}]}]}
|
glenn2/whisper-small-hi
| null |
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T19:02:01+00:00
|
[] |
[
"hi"
] |
TAGS
#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #hi #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
Whisper Small Hi 2
==================
This model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4422
* Wer: 32.4134
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 4000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #hi #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biogpt
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0886
- Precision: 0.6800
- Recall: 0.7967
- F1: 0.7338
- Accuracy: 0.9790
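
No usage example is given; a minimal sketch follows (the sentence is illustrative and the entity label set is an assumption, since the training dataset is not documented):

```python
# Hedged usage sketch: the entity types returned depend on the (undocumented) training label set.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="yiyang01/biogpt",  # this repository
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Mutations in the BRCA1 gene are associated with breast cancer."))
```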
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1005 | 1.0 | 679 | 0.0703 | 0.5884 | 0.7865 | 0.6732 | 0.9755 |
| 0.0469 | 2.0 | 1358 | 0.0699 | 0.6953 | 0.8030 | 0.7453 | 0.9797 |
| 0.014 | 3.0 | 2037 | 0.0886 | 0.6800 | 0.7967 | 0.7338 | 0.9790 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "biogpt", "results": []}]}
|
yiyang01/biogpt
| null |
[
"transformers",
"tensorboard",
"safetensors",
"biogpt",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T19:05:31+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #biogpt #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
biogpt
======
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0886
* Precision: 0.6800
* Recall: 0.7967
* F1: 0.7338
* Accuracy: 0.9790
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #biogpt #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
token-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
SweetZiyi/MLMA_Lab8_Task2
| null |
[
"transformers",
"safetensors",
"gpt2",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T19:07:47+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #gpt2 #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #gpt2 #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Anaphase21/bloom_for_english
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T19:12:24+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# THIS IS THE GGUF VERSION OF MISTED V2 7B. TO VIEW ORIGINAL MODEL [CLICK THIS](https://huggingface.co/walmart-the-bag/misted-v2-7b)
# Misted v2 7B
This is another version of [misted-7b](https://huggingface.co/walmart-the-bag/misted-7b). This creation was designed to tackle coding, provide instructions, solve riddles, and fulfill a variety of purposes. It was developed using the slerp approach, which involved combining several mistral models with misted-7b.
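
A minimal local-inference sketch with `llama-cpp-python` is shown below (the GGUF filename and the Alpaca-style prompt are placeholders; check the repository file list for the actual quantization, since the card does not state a prompt template):

```python
# Hedged usage sketch: the .gguf filename is a placeholder — download the desired quantization first.
from llama_cpp import Llama

llm = Llama(model_path="./misted-v2-7b.Q4_K_M.gguf", n_ctx=4096)
output = llm(
    "### Instruction:\nWrite a Python function that reverses a string.\n\n### Response:\n",  # assumed prompt style
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```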
|
{"language": ["en", "es"], "license": "apache-2.0", "library_name": "transformers", "tags": ["code", "mistral", "merge", "slerp"], "pipeline_tag": "text-generation", "inference": false}
|
Walmart-the-bag/Misted-v2-7B-gguf
| null |
[
"transformers",
"gguf",
"code",
"mistral",
"merge",
"slerp",
"text-generation",
"en",
"es",
"license:apache-2.0",
"region:us"
] | null |
2024-04-12T19:14:18+00:00
|
[] |
[
"en",
"es"
] |
TAGS
#transformers #gguf #code #mistral #merge #slerp #text-generation #en #es #license-apache-2.0 #region-us
|
# THIS IS THE GGUF VERSION OF MISTED V2 7B. TO VIEW ORIGINAL MODEL CLICK THIS
# Misted v2 7B
This is another version of misted-7b. This creation was designed to tackle coding, provide instructions, solve riddles, and fulfill a variety of purposes. It was developed using the slerp approach, which involved combining several mistral models with misted-7b.
|
[
"# THIS IS THE GGUF VERSION OF MISTED V2 7B. TO VIEW ORIGINAL MODEL CLICK THIS",
"# Misted v2 7B\nThis is another version of misted-7b. This creation was designed to tackle coding, provide instructions, solve riddles, and fulfill a variety of purposes. It was developed using the slerp approach, which involved combining several mistral models with misted-7b."
] |
[
"TAGS\n#transformers #gguf #code #mistral #merge #slerp #text-generation #en #es #license-apache-2.0 #region-us \n",
"# THIS IS THE GGUF VERSION OF MISTED V2 7B. TO VIEW ORIGINAL MODEL CLICK THIS",
"# Misted v2 7B\nThis is another version of misted-7b. This creation was designed to tackle coding, provide instructions, solve riddles, and fulfill a variety of purposes. It was developed using the slerp approach, which involved combining several mistral models with misted-7b."
] |
null |
transformers
|
# Uploaded model
- **Developed by:** czaplon
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
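
No usage snippet is provided; a minimal loading sketch with Unsloth follows (it assumes the repository can be loaded directly by `FastLanguageModel`, whether it holds merged weights or a LoRA adapter; the prompt is a placeholder):

```python
# Hedged loading sketch: works for either merged weights or a LoRA adapter pushed from Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="czaplon/new-BB-devil",  # this repository
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable faster inference kernels

inputs = tokenizer("[INST] Summarize what this model is for. [/INST]", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```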
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
|
czaplon/new-BB-devil
| null |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T19:16:19+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: czaplon
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: czaplon\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: czaplon\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2_PT
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
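
A minimal loading sketch (assumption: the repository stores a PEFT/LoRA adapter for microsoft/phi-2, consistent with the PEFT and TRL-SFT tags; the prompt format is a placeholder):

```python
# Hedged loading sketch: AutoPeftModelForCausalLM pulls the base model named in the adapter config.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "vitorandrade/phi-2_PT",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")  # base tokenizer; the adapter repo may not ship one

prompt = "Instruct: Explain what fine-tuning is in one sentence.\nOutput:"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```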
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "phi-2_PT", "results": []}]}
|
vitorandrade/phi-2_PT
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null |
2024-04-12T19:16:28+00:00
|
[] |
[] |
TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
|
# phi-2_PT
This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# phi-2_PT\n\nThis model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n",
"# phi-2_PT\n\nThis model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Uploaded as 4bit model
- **Developed by:** Labagaite
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-7b-chat-bnb-4bit
# Training Logs
## Summary metrics
### Best ROUGE-1 score : **0.007804878048780488**
### Best ROUGE-2 score : **0**
### Best ROUGE-L score : **0.005853658536585366**
## Wandb logs
You can view the training logs [<img src="https://raw.githubusercontent.com/wandb/wandb/main/docs/README_images/logo-light.svg" width="200"/>](https://wandb.ai/william-derue/LLM-summarizer_trainer/runs/9wgdnwzd).
## Training details
### training data
- Dataset : [fr-summarizer-dataset](https://huggingface.co/datasets/Labagaite/fr-summarizer-dataset)
- Data-size : 7.65 MB
- train : 1.97k rows
- validation : 440 rows
- roles : user , assistant
- Format chatml "role": "role", "content": "content", "user": "user", "assistant": "assistant"
<br>
*French audio podcast transcription*
# Project details
[<img src="https://avatars.githubusercontent.com/u/116890814?v=4" width="100"/>](https://github.com/WillIsback/Report_Maker)
Fine-tuned on French audio podcast transcriptions for the summarization task. As a result, the model is able to summarize French audio podcast transcriptions.
The model will be used in an AI application: [Report Maker](https://github.com/WillIsback/Report_Maker), which is a powerful tool designed to automate the process of transcribing and summarizing meetings.
It leverages state-of-the-art machine learning models to provide detailed and accurate reports.
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
This llama was trained with [LLM summarizer trainer](images/Llm_Summarizer_trainer_icon-removebg.png)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
**LLM summarizer trainer**
[<img src="images/Llm_Summarizer_trainer_icon-removebg.png" width="150"/>](https://github.com/WillIsback/LLM_Summarizer_Trainer)
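
A minimal summarization sketch (assumptions: the repository loads directly as a causal LM with `transformers` and the tokenizer carries the chatml-style user/assistant template described above; the transcript is a placeholder):

```python
# Hedged usage sketch: the French transcript below is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Labagaite/llama-Summarizer-2-7b-chat-bnb-4bit"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Résume cette transcription de podcast : <transcription ici>"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```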
|
{"language": ["fr"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "summarizer", "4bit"], "base_model": "unsloth/llama-2-7b-chat-bnb-4bit"}
|
Labagaite/llama-Summarizer-2-7b-chat-bnb-4bit
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"summarizer",
"4bit",
"conversational",
"fr",
"base_model:unsloth/llama-2-7b-chat-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null |
2024-04-12T19:17:51+00:00
|
[] |
[
"fr"
] |
TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #summarizer #4bit #conversational #fr #base_model-unsloth/llama-2-7b-chat-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
|
# Uploaded as 4bit model
- Developed by: Labagaite
- License: apache-2.0
- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit
# Training Logs
## Summary metrics
### Best ROUGE-1 score : 0.007804878048780488
### Best ROUGE-2 score : 0
### Best ROUGE-L score : 0.005853658536585366
## Wandb logs
You can view the training logs <img src="URL width="200"/>.
## Training details
### training data
- Dataset : fr-summarizer-dataset
- Data-size : 7.65 MB
- train : 1.97k rows
- validation : 440 rows
- roles : user , assistant
- Format chatml "role": "role", "content": "content", "user": "user", "assistant": "assistant"
<br>
*French audio podcast transcription*
# Project details
<img src="URL width="100"/>
Fine-tuned on French audio podcast transcriptions for the summarization task. As a result, the model is able to summarize French audio podcast transcriptions.
The model will be used in an AI application: Report Maker, which is a powerful tool designed to automate the process of transcribing and summarizing meetings.
It leverages state-of-the-art machine learning models to provide detailed and accurate reports.
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
This llama was trained with LLM summarizer trainer
<img src="URL width="200"/>
LLM summarizer trainer
<img src="images/Llm_Summarizer_trainer_icon-URL" width="150"/>
|
[
"# Uploaded as 4bit model\n\n- Developed by: Labagaite\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit",
"# Training Logs",
"## Summary metrics",
"### Best ROUGE-1 score : 0.007804878048780488",
"### Best ROUGE-2 score : 0",
"### Best ROUGE-L score : 0.005853658536585366",
"## Wandb logs\nYou can view the training logs <img src=\"URL width=\"200\"/>.",
"## Training details",
"### training data\n- Dataset : fr-summarizer-dataset\n- Data-size : 7.65 MB\n- train : 1.97k rows\n- validation : 440 rows\n- roles : user , assistant\n- Format chatml \"role\": \"role\", \"content\": \"content\", \"user\": \"user\", \"assistant\": \"assistant\"\n<br>\n*French audio podcast transcription*",
"# Project details\n<img src=\"URL width=\"100\"/>\nFine-tuned on French audio podcast transcription data for summarization task. As a result, the model is able to summarize French audio podcast transcription data.\nThe model will be used for an AI application: Report Maker wich is a powerful tool designed to automate the process of transcribing and summarizing meetings.\nIt leverages state-of-the-art machine learning models to provide detailed and accurate reports.\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\nThis llama was trained with LLM summarizer trainer\n<img src=\"URL width=\"200\"/>\nLLM summarizer trainer\n<img src=\"images/Llm_Summarizer_trainer_icon-URL\" width=\"150\"/>"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #summarizer #4bit #conversational #fr #base_model-unsloth/llama-2-7b-chat-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n",
"# Uploaded as 4bit model\n\n- Developed by: Labagaite\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit",
"# Training Logs",
"## Summary metrics",
"### Best ROUGE-1 score : 0.007804878048780488",
"### Best ROUGE-2 score : 0",
"### Best ROUGE-L score : 0.005853658536585366",
"## Wandb logs\nYou can view the training logs <img src=\"URL width=\"200\"/>.",
"## Training details",
"### training data\n- Dataset : fr-summarizer-dataset\n- Data-size : 7.65 MB\n- train : 1.97k rows\n- validation : 440 rows\n- roles : user , assistant\n- Format chatml \"role\": \"role\", \"content\": \"content\", \"user\": \"user\", \"assistant\": \"assistant\"\n<br>\n*French audio podcast transcription*",
"# Project details\n<img src=\"URL width=\"100\"/>\nFine-tuned on French audio podcast transcription data for summarization task. As a result, the model is able to summarize French audio podcast transcription data.\nThe model will be used for an AI application: Report Maker wich is a powerful tool designed to automate the process of transcribing and summarizing meetings.\nIt leverages state-of-the-art machine learning models to provide detailed and accurate reports.\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\nThis llama was trained with LLM summarizer trainer\n<img src=\"URL width=\"200\"/>\nLLM summarizer trainer\n<img src=\"images/Llm_Summarizer_trainer_icon-URL\" width=\"150\"/>"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-maplestory-ner-v2-test
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1849
- Precision: 0.8130
- Recall: 0.8378
- F1: 0.8252
- Accuracy: 0.9673
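
No usage example is included; a minimal sketch follows (the sentence is illustrative — the card does not document the entity label set of the MapleStory dataset):

```python
# Hedged usage sketch: the entity types returned depend on the (undocumented) training label set.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="nxaliao/bert-large-cased-maplestory-ner-v2-test",  # this repository
    aggregation_strategy="simple",
)
print(ner("Talk to Athena Pierce in Henesys to begin the Explorer questline."))
```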
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1976 | 1.0 | 2703 | 0.1770 | 0.7226 | 0.7824 | 0.7513 | 0.9516 |
| 0.1174 | 2.0 | 5406 | 0.1516 | 0.7860 | 0.8090 | 0.7974 | 0.9633 |
| 0.062 | 3.0 | 8109 | 0.1526 | 0.8108 | 0.8184 | 0.8146 | 0.9661 |
| 0.0334 | 4.0 | 10812 | 0.1648 | 0.8061 | 0.8378 | 0.8216 | 0.9668 |
| 0.0177 | 5.0 | 13515 | 0.1849 | 0.8130 | 0.8378 | 0.8252 | 0.9673 |
### Framework versions
- Transformers 4.39.3
- Pytorch 1.12.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-large-cased", "model-index": [{"name": "bert-large-cased-maplestory-ner-v2-test", "results": []}]}
|
nxaliao/bert-large-cased-maplestory-ner-v2-test
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:bert-large-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T19:19:20+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-large-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert-large-cased-maplestory-ner-v2-test
=======================================
This model is a fine-tuned version of bert-large-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1849
* Precision: 0.8130
* Recall: 0.8378
* F1: 0.8252
* Accuracy: 0.9673
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 1.12.0
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 1.12.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-large-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 1.12.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0_dataup_noreplacerej_40g_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
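
For orientation, the values above map onto a `TrainingArguments` object roughly as sketched below; the `output_dir` name is an assumption, and no DPO-specific options (such as the preference beta) are shown because the card does not list them.

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters: 4 per-device x 8 GPUs x 4 accumulation steps = 128 total batch.
args = TrainingArguments(
    output_dir="0.0_dataup_noreplacerej_40g_iter_1",  # assumed name
    learning_rate=5e-7,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)
```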
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.0_dataup_noreplacerej_40g_iter_1", "results": []}]}
|
ZhangShenao/0.0_dataup_noreplacerej_40g_iter_1
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T19:19:42+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.0_dataup_noreplacerej_40g_iter_1
This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
[
"# 0.0_dataup_noreplacerej_40g_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 128\n- total_eval_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.0_dataup_noreplacerej_40g_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 128\n- total_eval_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/Fredithefish/MysticNoromaidx
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MysticNoromaidx-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
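
As a rough illustration, a download that arrives as plain byte-split parts can be joined before loading; the part naming below is an assumption, and single-file quants need no such step.

```python
# Sketch: concatenate byte-split GGUF parts into a single file (file names are illustrative).
import shutil

parts = [
    "MysticNoromaidx.i1-Q6_K.gguf.part1of2",
    "MysticNoromaidx.i1-Q6_K.gguf.part2of2",
]
with open("MysticNoromaidx.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, merged)
```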
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "library_name": "transformers", "base_model": "Fredithefish/MysticNoromaidx", "quantized_by": "mradermacher"}
|
mradermacher/MysticNoromaidx-i1-GGUF
| null |
[
"transformers",
"gguf",
"en",
"base_model:Fredithefish/MysticNoromaidx",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T19:23:42+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #en #base_model-Fredithefish/MysticNoromaidx #endpoints_compatible #region-us
|
About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #en #base_model-Fredithefish/MysticNoromaidx #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-tiny2-finetuned-imdb
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0142
## Model description
More information needed
## Intended uses & limitations
More information needed
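
In the absence of documented usage, a minimal fill-mask sketch is shown below; the Russian example sentence is illustrative, and the `[MASK]` token is assumed from the rubert-tiny2 base tokenizer.

```python
from transformers import pipeline

# Sketch: masked-token prediction with the fine-tuned checkpoint.
fill_mask = pipeline("fill-mask", model="Pastushoc/rubert-tiny2-finetuned-imdb")

# Illustrative prompt ("This movie was simply [MASK]."), assuming the standard BERT-style mask token.
for prediction in fill_mask("Этот фильм был просто [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```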
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.4078 | 1.0 | 1563 | 6.1208 |
| 6.1804 | 2.0 | 3126 | 6.0142 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["generator"], "base_model": "cointegrated/rubert-tiny2", "model-index": [{"name": "rubert-tiny2-finetuned-imdb", "results": []}]}
|
Pastushoc/rubert-tiny2-finetuned-imdb
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:generator",
"base_model:cointegrated/rubert-tiny2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T19:24:09+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #fill-mask #generated_from_trainer #dataset-generator #base_model-cointegrated/rubert-tiny2 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
rubert-tiny2-finetuned-imdb
===========================
This model is a fine-tuned version of cointegrated/rubert-tiny2 on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 6.0142
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #fill-mask #generated_from_trainer #dataset-generator #base_model-cointegrated/rubert-tiny2 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Uploaded as lora model
- **Developed by:** Labagaite
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-7b-chat-bnb-4bit
# Training Logs
## Summary metrics
### Best ROUGE-1 score : **0.007804878048780488**
### Best ROUGE-2 score : **0**
### Best ROUGE-L score : **0.005853658536585366**
## Wandb logs
You can view the training logs [<img src="https://raw.githubusercontent.com/wandb/wandb/main/docs/README_images/logo-light.svg" width="200"/>](https://wandb.ai/william-derue/LLM-summarizer_trainer/runs/9wgdnwzd).
## Training details
### training data
- Dataset : [fr-summarizer-dataset](https://huggingface.co/datasets/Labagaite/fr-summarizer-dataset)
- Data-size : 7.65 MB
- train : 1.97k rows
- validation : 440 rows
- roles : user , assistant
- Format chatml "role": "role", "content": "content", "user": "user", "assistant": "assistant"
<br>
*French audio podcast transcription*
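
Given the chat format above, a minimal inference sketch follows; loading the adapter with PEFT on top of the 4-bit base model is an assumed path, and the prompt wording and generation settings are illustrative only.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: attach the LoRA adapter to the 4-bit chat base model.
base_id = "unsloth/llama-2-7b-chat-bnb-4bit"
adapter_id = "Labagaite/llama-Summarizer-2-7b-chat-LORA-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Illustrative chatml-style request; assumes the tokenizer ships a matching chat template.
messages = [{"role": "user", "content": "Résume la transcription suivante : ..."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
summary_ids = model.generate(inputs.to(model.device), max_new_tokens=256)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```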
# Project details
[<img src="https://avatars.githubusercontent.com/u/116890814?v=4" width="100"/>](https://github.com/WillIsback/Report_Maker)
Fine-tuned on French audio podcast transcription data for a summarization task. As a result, the model is able to summarize French audio podcast transcriptions.
The model will be used in an AI application, [Report Maker](https://github.com/WillIsback/Report_Maker), which is a powerful tool designed to automate the process of transcribing and summarizing meetings.
It leverages state-of-the-art machine learning models to provide detailed and accurate reports.
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
This llama was trained with [LLM summarizer trainer](images/Llm_Summarizer_trainer_icon-removebg.png)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
**LLM summarizer trainer**
[<img src="images/Llm_Summarizer_trainer_icon-removebg.png" width="150"/>](https://github.com/WillIsback/LLM_Summarizer_Trainer)
|
{"language": ["fr"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "summarizer", "lora"], "base_model": "unsloth/llama-2-7b-chat-bnb-4bit"}
|
Labagaite/llama-Summarizer-2-7b-chat-LORA-bnb-4bit
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"summarizer",
"lora",
"conversational",
"fr",
"base_model:unsloth/llama-2-7b-chat-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null |
2024-04-12T19:26:11+00:00
|
[] |
[
"fr"
] |
TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #summarizer #lora #conversational #fr #base_model-unsloth/llama-2-7b-chat-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
|
# Uploaded as lora model
- Developed by: Labagaite
- License: apache-2.0
- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit
# Training Logs
## Summary metrics
### Best ROUGE-1 score : 0.007804878048780488
### Best ROUGE-2 score : 0
### Best ROUGE-L score : 0.005853658536585366
## Wandb logs
You can view the training logs <img src="URL width="200"/>.
## Training details
### training data
- Dataset : fr-summarizer-dataset
- Data-size : 7.65 MB
- train : 1.97k rows
- validation : 440 rows
- roles : user , assistant
- Format chatml "role": "role", "content": "content", "user": "user", "assistant": "assistant"
<br>
*French audio podcast transcription*
# Project details
<img src="URL width="100"/>
Fine-tuned on French audio podcast transcription data for a summarization task. As a result, the model is able to summarize French audio podcast transcriptions.
The model will be used in an AI application, Report Maker, which is a powerful tool designed to automate the process of transcribing and summarizing meetings.
It leverages state-of-the-art machine learning models to provide detailed and accurate reports.
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
This llama was trained with LLM summarizer trainer
<img src="URL width="200"/>
LLM summarizer trainer
<img src="images/Llm_Summarizer_trainer_icon-URL" width="150"/>
|
[
"# Uploaded as lora model\n\n- Developed by: Labagaite\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit",
"# Training Logs",
"## Summary metrics",
"### Best ROUGE-1 score : 0.007804878048780488",
"### Best ROUGE-2 score : 0",
"### Best ROUGE-L score : 0.005853658536585366",
"## Wandb logs\nYou can view the training logs <img src=\"URL width=\"200\"/>.",
"## Training details",
"### training data\n- Dataset : fr-summarizer-dataset\n- Data-size : 7.65 MB\n- train : 1.97k rows\n- validation : 440 rows\n- roles : user , assistant\n- Format chatml \"role\": \"role\", \"content\": \"content\", \"user\": \"user\", \"assistant\": \"assistant\"\n<br>\n*French audio podcast transcription*",
"# Project details\n<img src=\"URL width=\"100\"/>\nFine-tuned on French audio podcast transcription data for summarization task. As a result, the model is able to summarize French audio podcast transcription data.\nThe model will be used for an AI application: Report Maker wich is a powerful tool designed to automate the process of transcribing and summarizing meetings.\nIt leverages state-of-the-art machine learning models to provide detailed and accurate reports.\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\nThis llama was trained with LLM summarizer trainer\n<img src=\"URL width=\"200\"/>\nLLM summarizer trainer\n<img src=\"images/Llm_Summarizer_trainer_icon-URL\" width=\"150\"/>"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #summarizer #lora #conversational #fr #base_model-unsloth/llama-2-7b-chat-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n",
"# Uploaded as lora model\n\n- Developed by: Labagaite\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-7b-chat-bnb-4bit",
"# Training Logs",
"## Summary metrics",
"### Best ROUGE-1 score : 0.007804878048780488",
"### Best ROUGE-2 score : 0",
"### Best ROUGE-L score : 0.005853658536585366",
"## Wandb logs\nYou can view the training logs <img src=\"URL width=\"200\"/>.",
"## Training details",
"### training data\n- Dataset : fr-summarizer-dataset\n- Data-size : 7.65 MB\n- train : 1.97k rows\n- validation : 440 rows\n- roles : user , assistant\n- Format chatml \"role\": \"role\", \"content\": \"content\", \"user\": \"user\", \"assistant\": \"assistant\"\n<br>\n*French audio podcast transcription*",
"# Project details\n<img src=\"URL width=\"100\"/>\nFine-tuned on French audio podcast transcription data for summarization task. As a result, the model is able to summarize French audio podcast transcription data.\nThe model will be used for an AI application: Report Maker wich is a powerful tool designed to automate the process of transcribing and summarizing meetings.\nIt leverages state-of-the-art machine learning models to provide detailed and accurate reports.\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\nThis llama was trained with LLM summarizer trainer\n<img src=\"URL width=\"200\"/>\nLLM summarizer trainer\n<img src=\"images/Llm_Summarizer_trainer_icon-URL\" width=\"150\"/>"
] |
sentence-similarity
|
sentence-transformers
|
# ciCic/paraphrase-multilingual-MiniLM-L12-v2-sts-2d-matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ciCic/paraphrase-multilingual-MiniLM-L12-v2-sts-2d-matryoshka')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ciCic/paraphrase-multilingual-MiniLM-L12-v2-sts-2d-matryoshka')
model = AutoModel.from_pretrained('ciCic/paraphrase-multilingual-MiniLM-L12-v2-sts-2d-matryoshka')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ciCic/paraphrase-multilingual-MiniLM-L12-v2-sts-2d-matryoshka)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.Matryoshka2dLoss.Matryoshka2dLoss` with parameters:
```
{'loss': 'CoSENTLoss', 'n_layers_per_step': 1, 'last_layer_weight': 1.0, 'prior_layers_weight': 1.0, 'kl_div_weight': 1.0, 'kl_temperature': 0.3, 'matryoshka_dims': [768, 512, 256, 128, 64], 'matryoshka_weights': [1, 1, 1, 1, 1], 'n_dims_per_step': 1}
```
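
The same loss can presumably be reconstructed in code as follows; this is a sketch built from the parameter values listed above, and the starting checkpoint name is an assumption.

```python
from sentence_transformers import SentenceTransformer, losses

# Assumed starting checkpoint for the fine-tune.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Matryoshka2dLoss wrapping CoSENTLoss, mirroring the listed parameters.
base_loss = losses.CoSENTLoss(model)
train_loss = losses.Matryoshka2dLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_layers_per_step=1,
    n_dims_per_step=1,
    last_layer_weight=1.0,
    prior_layers_weight=1.0,
    kl_div_weight=1.0,
    kl_temperature=0.3,
)
```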
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 144,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
ciCic/paraphrase-multilingual-MiniLM-L12-v2-sts-2d-matryoshka
| null |
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T19:26:25+00:00
|
[] |
[] |
TAGS
#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# ciCic/paraphrase-multilingual-MiniLM-L12-v2-sts-2d-matryoshka
This is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 360 with parameters:
Loss:
'sentence_transformers.losses.Matryoshka2dLoss.Matryoshka2dLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# ciCic/paraphrase-multilingual-MiniLM-L12-v2-sts-2d-matryoshka\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 360 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.Matryoshka2dLoss.Matryoshka2dLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# ciCic/paraphrase-multilingual-MiniLM-L12-v2-sts-2d-matryoshka\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 360 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.Matryoshka2dLoss.Matryoshka2dLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
null |
mlx
|
# mlx-community/zephyr-orpo-141b-A35b-v0.1-8bit
This model was converted to MLX format from [`HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1`]() using mlx-lm version **0.9.0**.
Refer to the [original model card](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/zephyr-orpo-141b-A35b-v0.1-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
{"license": "apache-2.0", "tags": ["trl", "orpo", "generated_from_trainer", "mlx"], "datasets": ["argilla/distilabel-capybara-dpo-7k-binarized"], "base_model": "mistral-community/Mixtral-8x22B-v0.1", "inference": {"parameters": {"temperature": 0.7}}, "model-index": [{"name": "zephyr-orpo-141b-A35b-v0.1", "results": []}]}
|
mlx-community/zephyr-orpo-141b-A35b-v0.1-8bit
| null |
[
"mlx",
"safetensors",
"mixtral",
"trl",
"orpo",
"generated_from_trainer",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"base_model:mistral-community/Mixtral-8x22B-v0.1",
"license:apache-2.0",
"region:us"
] | null |
2024-04-12T19:26:46+00:00
|
[] |
[] |
TAGS
#mlx #safetensors #mixtral #trl #orpo #generated_from_trainer #dataset-argilla/distilabel-capybara-dpo-7k-binarized #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #region-us
|
# mlx-community/zephyr-orpo-141b-A35b-v0.1-8bit
This model was converted to MLX format from ['HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1']() using mlx-lm version 0.9.0.
Refer to the original model card for more details on the model.
## Use with mlx
|
[
"# mlx-community/zephyr-orpo-141b-A35b-v0.1-8bit\nThis model was converted to MLX format from ['HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1']() using mlx-lm version 0.9.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
[
"TAGS\n#mlx #safetensors #mixtral #trl #orpo #generated_from_trainer #dataset-argilla/distilabel-capybara-dpo-7k-binarized #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #region-us \n",
"# mlx-community/zephyr-orpo-141b-A35b-v0.1-8bit\nThis model was converted to MLX format from ['HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1']() using mlx-lm version 0.9.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
text-classification
|
transformers
|
## Metrics
- loss: 1.0342
- accuracy: 0.8359
- precision: 0.8409
- recall: 0.8359
- precision_macro: 0.8136
- recall_macro: 0.8000
- macro_fpr: 0.0142
- weighted_fpr: 0.0138
- weighted_specificity: 0.9792
- macro_specificity: 0.9877
- weighted_sensitivity: 0.8359
- macro_sensitivity: 0.8000
- f1_micro: 0.8359
- f1_macro: 0.8010
- f1_weighted: 0.8352
- runtime: 19.9583
- samples_per_second: 64.4340
- steps_per_second: 8.0670
## Metrics
- loss: 1.0345
- accuracy: 0.8358
- precision: 0.8408
- recall: 0.8358
- precision_macro: 0.8207
- recall_macro: 0.7957
- macro_fpr: 0.0143
- weighted_fpr: 0.0138
- weighted_specificity: 0.9790
- macro_specificity: 0.9877
- weighted_sensitivity: 0.8358
- macro_sensitivity: 0.7957
- f1_micro: 0.8358
- f1_macro: 0.8020
- f1_weighted: 0.8352
- runtime: 22.0569
- samples_per_second: 58.5300
- steps_per_second: 7.3450
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# InLegalBERT
This model is a fine-tuned version of [law-ai/InLegalBERT](https://huggingface.co/law-ai/InLegalBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0763
- Accuracy: 0.8304
- Precision: 0.8363
- Recall: 0.8304
- Precision Macro: 0.7959
- Recall Macro: 0.8029
- Macro Fpr: 0.0150
- Weighted Fpr: 0.0145
- Weighted Specificity: 0.9774
- Macro Specificity: 0.9871
- Weighted Sensitivity: 0.8296
- Macro Sensitivity: 0.8029
- F1 Micro: 0.8296
- F1 Macro: 0.7954
- F1 Weighted: 0.8283
## Model description
More information needed
## Intended uses & limitations
More information needed
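
Pending fuller documentation, the checkpoint can presumably be queried through the standard text-classification pipeline; the example sentence below is illustrative, and the output label set depends on the undocumented dataset.

```python
from transformers import pipeline

# Sketch: run the fine-tuned checkpoint as a text classifier.
classifier = pipeline("text-classification", model="xshubhamx/InLegalBERT")

# Illustrative legal-domain input; labels come from the (undocumented) training data.
print(classifier("The appellant challenges the order of the High Court dated 12 March."))
```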
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Precision Macro | Recall Macro | Macro Fpr | Weighted Fpr | Weighted Specificity | Macro Specificity | Weighted Sensitivity | Macro Sensitivity | F1 Micro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:---------------:|:------------:|:---------:|:------------:|:--------------------:|:-----------------:|:--------------------:|:-----------------:|:--------:|:--------:|:-----------:|
| 1.065 | 1.0 | 643 | 0.6395 | 0.7994 | 0.7818 | 0.7994 | 0.6194 | 0.6308 | 0.0185 | 0.0176 | 0.9714 | 0.9847 | 0.7994 | 0.6308 | 0.7994 | 0.6029 | 0.7804 |
| 0.5866 | 2.0 | 1286 | 0.6907 | 0.8187 | 0.8199 | 0.8187 | 0.7285 | 0.7366 | 0.0161 | 0.0156 | 0.9765 | 0.9864 | 0.8187 | 0.7366 | 0.8187 | 0.7276 | 0.8152 |
| 0.4622 | 3.0 | 1929 | 0.8056 | 0.8180 | 0.8137 | 0.8180 | 0.7227 | 0.7376 | 0.0162 | 0.0156 | 0.9764 | 0.9863 | 0.8180 | 0.7376 | 0.8180 | 0.7283 | 0.8150 |
| 0.2398 | 4.0 | 2572 | 0.9310 | 0.8172 | 0.8235 | 0.8172 | 0.7661 | 0.7425 | 0.0161 | 0.0157 | 0.9762 | 0.9862 | 0.8172 | 0.7425 | 0.8172 | 0.7407 | 0.8161 |
| 0.1611 | 5.0 | 3215 | 1.0763 | 0.8304 | 0.8363 | 0.8304 | 0.8174 | 0.7918 | 0.0148 | 0.0144 | 0.9784 | 0.9873 | 0.8304 | 0.7918 | 0.8304 | 0.7986 | 0.8304 |
| 0.1055 | 6.0 | 3858 | 1.1377 | 0.8257 | 0.8275 | 0.8257 | 0.8039 | 0.7810 | 0.0154 | 0.0149 | 0.9775 | 0.9869 | 0.8257 | 0.7810 | 0.8257 | 0.7863 | 0.8246 |
| 0.0463 | 7.0 | 4501 | 1.3215 | 0.8071 | 0.8111 | 0.8071 | 0.7692 | 0.7689 | 0.0172 | 0.0168 | 0.9761 | 0.9856 | 0.8071 | 0.7689 | 0.8071 | 0.7661 | 0.8078 |
| 0.031 | 8.0 | 5144 | 1.3483 | 0.8203 | 0.8170 | 0.8203 | 0.7773 | 0.7727 | 0.0161 | 0.0154 | 0.9751 | 0.9864 | 0.8203 | 0.7727 | 0.8203 | 0.7690 | 0.8175 |
| 0.0202 | 9.0 | 5787 | 1.3730 | 0.8280 | 0.8263 | 0.8280 | 0.7818 | 0.7803 | 0.0152 | 0.0146 | 0.9779 | 0.9871 | 0.8280 | 0.7803 | 0.8280 | 0.7753 | 0.8256 |
| 0.0133 | 10.0 | 6430 | 1.5407 | 0.8164 | 0.8163 | 0.8164 | 0.7688 | 0.7779 | 0.0165 | 0.0158 | 0.9751 | 0.9861 | 0.8164 | 0.7779 | 0.8164 | 0.7655 | 0.8135 |
| 0.0051 | 11.0 | 7073 | 1.5235 | 0.8226 | 0.8265 | 0.8226 | 0.7900 | 0.7680 | 0.0156 | 0.0152 | 0.9769 | 0.9866 | 0.8226 | 0.7680 | 0.8226 | 0.7744 | 0.8234 |
| 0.0027 | 12.0 | 7716 | 1.5643 | 0.8265 | 0.8259 | 0.8265 | 0.7805 | 0.7841 | 0.0154 | 0.0148 | 0.9772 | 0.9869 | 0.8265 | 0.7841 | 0.8265 | 0.7775 | 0.8245 |
| 0.002 | 13.0 | 8359 | 1.5516 | 0.8280 | 0.8273 | 0.8280 | 0.7882 | 0.7902 | 0.0152 | 0.0146 | 0.9779 | 0.9871 | 0.8280 | 0.7902 | 0.8280 | 0.7860 | 0.8262 |
| 0.0015 | 14.0 | 9002 | 1.5835 | 0.8273 | 0.8268 | 0.8273 | 0.7943 | 0.8022 | 0.0153 | 0.0147 | 0.9773 | 0.9870 | 0.8273 | 0.8022 | 0.8273 | 0.7943 | 0.8259 |
| 0.0007 | 15.0 | 9645 | 1.5914 | 0.8296 | 0.8293 | 0.8296 | 0.7959 | 0.8029 | 0.0150 | 0.0145 | 0.9774 | 0.9871 | 0.8296 | 0.8029 | 0.8296 | 0.7954 | 0.8283 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall"], "base_model": "law-ai/InLegalBERT", "model-index": [{"name": "InLegalBERT", "results": []}]}
|
xshubhamx/InLegalBERT
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:law-ai/InLegalBERT",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T19:27:21+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-law-ai/InLegalBERT #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
Metrics
-------
* loss: 1.0342
* accuracy: 0.8359
* precision: 0.8409
* recall: 0.8359
* precision\_macro: 0.8136
* recall\_macro: 0.8000
* macro\_fpr: 0.0142
* weighted\_fpr: 0.0138
* weighted\_specificity: 0.9792
* macro\_specificity: 0.9877
* weighted\_sensitivity: 0.8359
* macro\_sensitivity: 0.8000
* f1\_micro: 0.8359
* f1\_macro: 0.8010
* f1\_weighted: 0.8352
* runtime: 19.9583
* samples\_per\_second: 64.4340
* steps\_per\_second: 8.0670
Metrics
-------
* loss: 1.0345
* accuracy: 0.8358
* precision: 0.8408
* recall: 0.8358
* precision\_macro: 0.8207
* recall\_macro: 0.7957
* macro\_fpr: 0.0143
* weighted\_fpr: 0.0138
* weighted\_specificity: 0.9790
* macro\_specificity: 0.9877
* weighted\_sensitivity: 0.8358
* macro\_sensitivity: 0.7957
* f1\_micro: 0.8358
* f1\_macro: 0.8020
* f1\_weighted: 0.8352
* runtime: 22.0569
* samples\_per\_second: 58.5300
* steps\_per\_second: 7.3450
InLegalBERT
===========
This model is a fine-tuned version of law-ai/InLegalBERT on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0763
* Accuracy: 0.8304
* Precision: 0.8363
* Recall: 0.8304
* Precision Macro: 0.7959
* Recall Macro: 0.8029
* Macro Fpr: 0.0150
* Weighted Fpr: 0.0145
* Weighted Specificity: 0.9774
* Macro Specificity: 0.9871
* Weighted Sensitivity: 0.8296
* Macro Sensitivity: 0.8029
* F1 Micro: 0.8296
* F1 Macro: 0.7954
* F1 Weighted: 0.8283
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.1.2
* Datasets 2.1.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.1.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-law-ai/InLegalBERT #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.1.0\n* Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-llama-7b-text-to-sql
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
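
Pending fuller documentation, a minimal inference sketch is shown below; it assumes the adapter is loaded with PEFT on top of the base Code Llama checkpoint, and the prompt format is illustrative rather than the template used during fine-tuning.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: load the base model and attach this fine-tuned adapter.
base_id = "codellama/CodeLlama-7b-hf"
adapter_id = "patrickfleith/code-llama-7b-text-to-sql"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Illustrative prompt; the exact text-to-SQL template used in training is not documented here.
prompt = "-- Schema: CREATE TABLE users(id INT, name TEXT)\n-- Question: How many users are there?\nSELECT"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```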
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "codellama/CodeLlama-7b-hf", "model-index": [{"name": "code-llama-7b-text-to-sql", "results": []}]}
|
patrickfleith/code-llama-7b-text-to-sql
| null |
[
"peft",
"tensorboard",
"safetensors",
"llama",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null |
2024-04-12T19:28:42+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #llama #trl #sft #generated_from_trainer #dataset-generator #base_model-codellama/CodeLlama-7b-hf #license-llama2 #region-us
|
# code-llama-7b-text-to-sql
This model is a fine-tuned version of codellama/CodeLlama-7b-hf on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
[
"# code-llama-7b-text-to-sql\n\nThis model is a fine-tuned version of codellama/CodeLlama-7b-hf on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #llama #trl #sft #generated_from_trainer #dataset-generator #base_model-codellama/CodeLlama-7b-hf #license-llama2 #region-us \n",
"# code-llama-7b-text-to-sql\n\nThis model is a fine-tuned version of codellama/CodeLlama-7b-hf on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_keras_callback"], "base_model": "camembert-base", "model-index": [{"name": "dummy-model", "results": []}]}
|
Mouwiya/dummy-model
| null |
[
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"base_model:camembert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T19:30:26+00:00
|
[] |
[] |
TAGS
#transformers #tf #camembert #fill-mask #generated_from_keras_callback #base_model-camembert-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# dummy-model
This model is a fine-tuned version of camembert-base on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Tokenizers 0.15.2
|
[
"# dummy-model\n\nThis model is a fine-tuned version of camembert-base on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- TensorFlow 2.15.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tf #camembert #fill-mask #generated_from_keras_callback #base_model-camembert-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# dummy-model\n\nThis model is a fine-tuned version of camembert-base on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- TensorFlow 2.15.0\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Multi_verse_modelInex12-7B
Multi_verse_modelInex12-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model)
* [MSL7/INEX12-7b](https://huggingface.co/MSL7/INEX12-7b)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: MTSAIR/multi_verse_model
        layer_range: [0, 32]
      - model: MSL7/INEX12-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: MTSAIR/multi_verse_model
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Multi_verse_modelInex12-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"], "base_model": ["MTSAIR/multi_verse_model", "MSL7/INEX12-7b"]}
|
automerger/Multi_verse_modelInex12-7B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"conversational",
"base_model:MTSAIR/multi_verse_model",
"base_model:MSL7/INEX12-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T19:33:09+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #conversational #base_model-MTSAIR/multi_verse_model #base_model-MSL7/INEX12-7b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Multi_verse_modelInex12-7B
Multi_verse_modelInex12-7B is an automated merge created by Maxime Labonne using the following configuration.
* MTSAIR/multi_verse_model
* MSL7/INEX12-7b
## Configuration
## Usage
|
[
"# Multi_verse_modelInex12-7B\n\nMulti_verse_modelInex12-7B is an automated merge created by Maxime Labonne using the following configuration.\n* MTSAIR/multi_verse_model\n* MSL7/INEX12-7b",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #conversational #base_model-MTSAIR/multi_verse_model #base_model-MSL7/INEX12-7b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Multi_verse_modelInex12-7B\n\nMulti_verse_modelInex12-7B is an automated merge created by Maxime Labonne using the following configuration.\n* MTSAIR/multi_verse_model\n* MSL7/INEX12-7b",
"## Configuration",
"## Usage"
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while still delivering strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
### Zero-shot Evaluation
We evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the `llm_eval` library and list the results below:
| **Repository (Qwen Family)** | **Avg Acc.** | **OpenBQ** | **ARC-E** | **Winogr.** | **HellaS.** | **ARC-C** | **PIQA** | **BoolQ** | **RACE** | **ANLI-R1** | **ANLI-R2** | **ANLI-R3** | **WiC** |
|:----------------------------------|:------------:|:------------:|:-----------:|:-------------:|:-------------:|:-----------:|:----------:|:-----------:|:-----------:|:-------------:|:-------------:|:-------------:|:---------:|
| `Qwen-1.5-0.5B-layer-mix-bpw-2.2` | 0.398 | 0.170 | 0.443 | 0.527 | 0.332 | 0.238 | 0.634 | 0.620 | 0.318 | 0.332 | 0.338 | 0.330 | 0.500 |
| `Qwen-1.5-0.5B-layer-mix-bpw-2.5` | 0.394 | 0.170 | 0.514 | 0.541 | 0.337 | 0.232 | 0.637 | 0.496 | 0.318 | 0.316 | 0.358 | 0.326 | 0.490 |
| `Qwen-1.5-0.5B-layer-mix-bpw-3.0` | 0.407 | 0.198 | 0.533 | 0.536 | 0.348 | 0.234 | 0.671 | 0.552 | 0.323 | 0.330 | 0.333 | 0.335 | 0.495 |
| `Qwen-1.5-1.8B-layer-mix-bpw-2.2` | 0.415 | 0.218 | 0.539 | 0.586 | 0.392 | 0.260 | 0.678 | 0.622 | 0.333 | 0.333 | 0.333 | 0.336 | 0.464 |
| `Qwen-1.5-1.8B-layer-mix-bpw-2.5` | 0.423 | 0.222 | 0.592 | 0.585 | 0.406 | 0.267 | 0.695 | 0.629 | 0.336 | 0.314 | 0.339 | 0.361 | 0.507 |
| `Qwen-1.5-1.8B-layer-mix-bpw-3.0` | 0.438 | 0.246 | 0.576 | 0.563 | 0.413 | 0.277 | 0.694 | 0.645 | 0.352 | 0.323 | 0.336 | 0.343 | 0.492 |
| `Qwen-1.5-4B-layer-mix-bpw-2.2` | 0.480 | 0.254 | 0.663 | 0.623 | 0.463 | 0.339 | 0.712 | 0.718 | 0.349 | 0.326 | 0.355 | 0.384 | 0.513 |
| `Qwen-1.5-4B-layer-mix-bpw-2.5` | 0.490 | 0.266 | 0.677 | 0.629 | 0.473 | 0.365 | 0.732 | 0.717 | 0.351 | 0.372 | 0.352 | 0.360 | 0.502 |
| `Qwen-1.5-4B-layer-mix-bpw-3.0` | 0.502 | 0.268 | 0.678 | 0.642 | 0.494 | 0.358 | 0.755 | 0.757 | 0.380 | 0.395 | 0.395 | 0.392 | 0.519 |
| `Qwen-1.5-7B-layer-mix-bpw-2.2` | 0.513 | 0.278 | 0.669 | 0.654 | 0.504 | 0.389 | 0.741 | 0.759 | 0.376 | 0.383 | 0.410 | 0.403 | 0.517 |
| `Qwen-1.5-7B-layer-mix-bpw-2.5` | 0.520 | 0.294 | 0.705 | 0.650 | 0.520 | 0.387 | 0.750 | 0.769 | 0.371 | 0.445 | 0.424 | 0.398 | 0.564 |
| `Qwen-1.5-7B-layer-mix-bpw-3.0` | 0.531 | 0.292 | 0.713 | 0.654 | 0.545 | 0.405 | 0.764 | 0.807 | 0.383 | 0.424 | 0.393 | 0.414 | 0.627 |
| `Qwen-1.5-14B-layer-mix-bpw-2.5` | 0.553 | 0.318 | 0.727 | 0.682 | 0.564 | 0.413 | 0.775 | 0.792 | 0.390 | 0.472 | 0.434 | 0.446 | 0.623 |
| `Qwen-1.5-32B-layer-mix-bpw-3.0` | 0.599 | 0.346 | 0.775 | 0.722 | 0.620 | 0.492 | 0.807 | 0.853 | 0.444 | 0.515 | 0.494 | 0.478 | 0.642 |
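
As an illustration only, a comparable zero-shot run can be launched with EleutherAI's `lm-evaluation-harness` (`lm_eval`); the model id, task list, batch size, and the choice of the plain `hf` backend below are assumptions, and the low-bit checkpoints may in practice require the green-bit-llm engine referenced above.

```python
import lm_eval  # pip install lm-eval

# Minimal sketch: score a Hub checkpoint on a few of the tasks from the table.
# Model id, tasks, and batch size are illustrative placeholders.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-3.0",
    tasks=["arc_easy", "winogrande", "piqa", "boolq"],
    batch_size=8,
)
print(results["results"])
```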
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-3.0
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T19:34:08+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
GreenBit LLMs
=============
These are GreenBitAI's pretrained low-bit LLMs, offering extreme compression while retaining strong performance.
Please refer to our Github page for the code to run the model and more information.
### Zero-shot Evaluation
We evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\_eval' library and list the results below:
|
[
"### Zero-shot Evaluation\n\n\nWe evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\\_eval' library and list the results below:"
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Zero-shot Evaluation\n\n\nWe evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\\_eval' library and list the results below:"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MLMA_biogpt_tuned_1412
This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1566
- Precision: 0.4828
- Recall: 0.5685
- F1: 0.5222
- Accuracy: 0.9542
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.329 | 1.0 | 679 | 0.1783 | 0.375 | 0.5154 | 0.4341 | 0.9407 |
| 0.1756 | 2.0 | 1358 | 0.1703 | 0.4751 | 0.5521 | 0.5107 | 0.9520 |
| 0.1045 | 3.0 | 2037 | 0.1566 | 0.4828 | 0.5685 | 0.5222 | 0.9542 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "microsoft/biogpt", "model-index": [{"name": "MLMA_biogpt_tuned_1412", "results": []}]}
|
wins1502/MLMA_biogpt_tuned_1412
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/biogpt",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T19:37:35+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
MLMA\_biogpt\_tuned\_1412
=========================
This model is a fine-tuned version of microsoft/biogpt on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1566
* Precision: 0.4828
* Recall: 0.5685
* F1: 0.5222
* Accuracy: 0.9542
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": ["trl", "sft"]}
|
AsphyXIA/gemma-g2
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T19:38:32+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #gemma #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #gemma #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
## Models
```
yolov3
yolov4
yolov4-tiny
yolov4-csp
```
## Training
Pretrained from [AlexeyAB](https://github.com/AlexeyAB/darknet) and [pjreddie](https://pjreddie.com/darknet/yolo)
|
{"language": ["en"], "license": "mit", "tags": ["yolov4", "yolov3"]}
|
homohapiens/darknet-yolov4
| null |
[
"yolov4",
"yolov3",
"en",
"license:mit",
"region:us"
] | null |
2024-04-12T19:40:18+00:00
|
[] |
[
"en"
] |
TAGS
#yolov4 #yolov3 #en #license-mit #region-us
|
## Models
## Training
Pretrained from AlexeyAB and pjreddie
|
[
"## Models",
"## Training\nPretrianed from AlexeyAB and pjreddie"
] |
[
"TAGS\n#yolov4 #yolov3 #en #license-mit #region-us \n",
"## Models",
"## Training\nPretrianed from AlexeyAB and pjreddie"
] |
null |
adapter-transformers
|
# Adapter `BigTMiami/B_adapter_seq_bn_pretraining_P_10` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_split_25M_reviews_20_percent_condensed](https://huggingface.co/datasets/BigTMiami/amazon_split_25M_reviews_20_percent_condensed/) dataset and includes a prediction head for masked lm.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/B_adapter_seq_bn_pretraining_P_10", source="hf", set_active=True)
```
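
Once the adapter is active, the masked-LM head can be queried in the usual way; a minimal sketch, assuming the loaded head exposes `logits` as standard Transformers heads do (the example sentence is a placeholder, not taken from the training data):

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("The packaging was <mask> and the item arrived intact.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # `model` from the snippet above, adapter active
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
```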
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
{"tags": ["adapter-transformers", "roberta"], "datasets": ["BigTMiami/amazon_split_25M_reviews_20_percent_condensed"]}
|
BigTMiami/B_adapter_seq_bn_pretraining_P_10
| null |
[
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_split_25M_reviews_20_percent_condensed",
"region:us"
] | null |
2024-04-12T19:40:28+00:00
|
[] |
[] |
TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_split_25M_reviews_20_percent_condensed #region-us
|
# Adapter 'BigTMiami/B_adapter_seq_bn_pretraining_P_10' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
|
[
"# Adapter 'BigTMiami/B_adapter_seq_bn_pretraining_P_10' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
[
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_split_25M_reviews_20_percent_condensed #region-us \n",
"# Adapter 'BigTMiami/B_adapter_seq_bn_pretraining_P_10' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
pgrimes/woodman-gpt
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T19:40:48+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biogpt
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1404
- Precision: 0.4590
- Recall: 0.5400
- F1: 0.4962
- Accuracy: 0.9579
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3225 | 1.0 | 679 | 0.1673 | 0.3285 | 0.4041 | 0.3624 | 0.9473 |
| 0.1696 | 2.0 | 1358 | 0.1510 | 0.4053 | 0.5464 | 0.4654 | 0.9532 |
| 0.0986 | 3.0 | 2037 | 0.1404 | 0.4590 | 0.5400 | 0.4962 | 0.9579 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "biogpt", "results": []}]}
|
ywu289/biogpt
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T19:41:22+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
biogpt
======
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1404
* Precision: 0.4590
* Recall: 0.5400
* F1: 0.4962
* Accuracy: 0.9579
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0001_idpo_same_6iters_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
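
For reference, the settings above give an effective batch of 8 samples per device × 8 devices × 2 accumulation steps = 128. A hedged sketch of how they map onto `transformers.TrainingArguments` follows; the output directory and precision flag are assumptions, and the actual run used the alignment-handbook training scripts rather than this snippet.

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the listed hyperparameters, not the original script.
args = TrainingArguments(
    output_dir="0.0001_idpo_same_6iters_iter_1",  # assumed
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # x 8 GPUs x 2 accumulation steps = 128 effective
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
    bf16=True,  # assumption: a common choice for Mistral-7B DPO runs
)
```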
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.0001_idpo_same_6iters_iter_1", "results": []}]}
|
ShenaoZ/0.0001_idpo_same_6iters_iter_1
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T19:45:32+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.0001_idpo_same_6iters_iter_1
This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
[
"# 0.0001_idpo_same_6iters_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.0001_idpo_same_6iters_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
text-generation
|
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralFT
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 10
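
Since the card is tagged `trl`/`sft` with a PEFT adapter, a minimal sketch of a comparable setup is shown below; the LoRA settings, the stand-in dataset, the text field name, and the output directory are assumptions rather than values from this card, the exact TRL API may differ between versions, and the card's `lr_scheduler_warmup_steps: 0.03` reads like a ratio and is omitted here.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")  # assumed stand-in for "generator"
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")  # assumed values

args = TrainingArguments(
    output_dir="mistralFT",          # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    lr_scheduler_type="constant",
    max_steps=10,
    seed=42,
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",
    peft_config=peft_config,
)
trainer.train()
```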
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "pipeline_tag": "text-generation", "model-index": [{"name": "mistralFT", "results": []}]}
|
Yash0109/mistralFT
| null |
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"text-generation",
"conversational",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null |
2024-04-12T19:49:23+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #text-generation #conversational #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
|
# mistralFT
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# mistralFT\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_steps: 0.03\n- training_steps: 10",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #text-generation #conversational #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n",
"# mistralFT\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_steps: 0.03\n- training_steps: 10",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
reinforcement-learning
|
stable-baselines3
|
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga davmel -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga davmel -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
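
Alternatively, the checkpoint can be loaded programmatically with `huggingface_sb3` and Stable-Baselines3; the zip filename below is an assumption about how the RL Zoo named the artifact, so check the repository file listing if it differs.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the zipped agent from the Hub and load it with SB3.
checkpoint = load_from_hub(
    repo_id="davmel/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed filename
)
model = DQN.load(checkpoint)
```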
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga davmel
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 300000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
{"library_name": "stable-baselines3", "tags": ["SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "SpaceInvadersNoFrameskip-v4", "type": "SpaceInvadersNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "232.00 +/- 47.81", "name": "mean_reward", "verified": false}]}]}]}
|
davmel/dqn-SpaceInvadersNoFrameskip-v4
| null |
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-12T19:50:20+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# DQN Agent playing SpaceInvadersNoFrameskip-v4
This is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4
using the stable-baselines3 library
and the RL Zoo.
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: URL
SB3: URL
SB3 Contrib: URL
Install the RL Zoo (with SB3 and SB3-Contrib):
If you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:
## Training (with the RL Zoo)
## Hyperparameters
# Environment Arguments
|
[
"# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.",
"## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:",
"## Training (with the RL Zoo)",
"## Hyperparameters",
"# Environment Arguments"
] |
[
"TAGS\n#stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.",
"## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:",
"## Training (with the RL Zoo)",
"## Hyperparameters",
"# Environment Arguments"
] |
null |
adapter-transformers
|
# Adapter `BigTMiami/B_adapter_seq_bn_classification_C_10` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/B_adapter_seq_bn_classification_C_10", source="hf", set_active=True)
```
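
With the adapter active, the classification head can be used directly; a minimal sketch, assuming the loaded head exposes `logits` as standard Transformers heads do (the example review text is a placeholder):

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("This review was detailed and genuinely helpful.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # `model` from the snippet above, adapter active
print(logits.softmax(dim=-1))  # helpfulness class probabilities
```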
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
{"tags": ["adapter-transformers", "roberta"], "datasets": ["BigTMiami/amazon_helpfulness"]}
|
BigTMiami/B_adapter_seq_bn_classification_C_10
| null |
[
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_helpfulness",
"region:us"
] | null |
2024-04-12T19:50:46+00:00
|
[] |
[] |
TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us
|
# Adapter 'BigTMiami/B_adapter_seq_bn_classification_C_10' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
|
[
"# Adapter 'BigTMiami/B_adapter_seq_bn_classification_C_10' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
[
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us \n",
"# Adapter 'BigTMiami/B_adapter_seq_bn_classification_C_10' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-imdb-1b-mz-ada-v3-bs-4
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-1b", "model-index": [{"name": "robust_llm_pythia-imdb-1b-mz-ada-v3-bs-4", "results": []}]}
|
AlignmentResearch/robust_llm_pythia-imdb-1b-mz-ada-v3-bs-4
| null |
[
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T19:51:39+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-imdb-1b-mz-ada-v3-bs-4
This model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# robust_llm_pythia-imdb-1b-mz-ada-v3-bs-4\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 4\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-imdb-1b-mz-ada-v3-bs-4\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 4\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vecvanilla_load_best
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset.
It achieves the following results on the evaluation set:
- best model at 1400 steps, training loss 0.871600, validation loss 0.835038, evaluation WER 0.324867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
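The WER figures in the results below come from comparing greedy CTC decodes against the reference transcripts. A minimal `compute_metrics` sketch using the `evaluate` library is shown here; loading the processor from the base checkpoint named in this card is an assumption about the actual training script.

```python
import numpy as np
import evaluate
from transformers import Wav2Vec2Processor

# Assumption: the base checkpoint's processor matches the one used during training.
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
wer_metric = evaluate.load("wer")

def compute_metrics(pred):
    # Greedy-decode the CTC logits.
    pred_ids = np.argmax(pred.predictions, axis=-1)
    # Replace label padding (-100) with the pad token id before decoding.
    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
    pred_str = processor.batch_decode(pred_ids)
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
    return {"wer": wer_metric.compute(predictions=pred_str, references=label_str)}
```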
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.47 | 0.43 | 100 | 1.0582 | 0.4027 |
| 1.2584 | 0.85 | 200 | 0.9775 | 0.3719 |
| 1.1377 | 1.28 | 300 | 0.9759 | 0.3628 |
| 1.0955 | 1.71 | 400 | 1.1271 | 0.3626 |
| 1.0945 | 2.14 | 500 | 0.9063 | 0.3589 |
| 1.0567 | 2.56 | 600 | 0.9288 | 0.3564 |
| 1.033 | 2.99 | 700 | 1.1634 | 0.3522 |
| 1.0043 | 3.42 | 800 | 0.8396 | 0.3506 |
| 0.9776 | 3.85 | 900 | 0.8654 | 0.3437 |
| 0.9147 | 4.27 | 1000 | 0.8816 | 0.3362 |
| 0.9041 | 4.7 | 1100 | 0.8994 | 0.3303 |
| 0.8648 | 5.13 | 1200 | 0.8379 | 0.3361 |
| 0.8241 | 5.56 | 1300 | 0.8263 | 0.3292 |
| 0.8716 | 5.98 | 1400 | 0.8350 | 0.3249 |
| 0.8218 | 6.41 | 1500 | nan | 1.0 |
| 0.0 | 6.84 | 1600 | nan | 1.0 |
| 0.0 | 7.26 | 1700 | nan | 1.0 |
| 0.0 | 7.69 | 1800 | nan | 1.0 |
| 0.0 | 8.12 | 1900 | nan | 1.0 |
| 0.0 | 8.55 | 2000 | nan | 1.0 |
| 0.0 | 8.97 | 2100 | nan | 1.0 |
| 0.0 | 9.4 | 2200 | nan | 1.0 |
| 0.0 | 9.83 | 2300 | nan | 1.0 |
| 0.0 | 10.26 | 2400 | nan | 1.0 |
| 0.0 | 10.68 | 2500 | nan | 1.0 |
| 0.0 | 11.11 | 2600 | nan | 1.0 |
| 0.0 | 11.54 | 2700 | nan | 1.0 |
| 0.0 | 11.97 | 2800 | nan | 1.0 |
| 0.0 | 12.39 | 2900 | nan | 1.0 |
| 0.0 | 12.82 | 3000 | nan | 1.0 |
| 0.0 | 13.25 | 3100 | nan | 1.0 |
| 0.0 | 13.68 | 3200 | nan | 1.0 |
| 0.0 | 14.1 | 3300 | nan | 1.0 |
| 0.0 | 14.53 | 3400 | nan | 1.0 |
| 0.0 | 14.96 | 3500 | nan | 1.0 |
| 0.0 | 15.38 | 3600 | nan | 1.0 |
| 0.0 | 15.81 | 3700 | nan | 1.0 |
| 0.0 | 16.24 | 3800 | nan | 1.0 |
| 0.0 | 16.67 | 3900 | nan | 1.0 |
| 0.0 | 17.09 | 4000 | nan | 1.0 |
| 0.0 | 17.52 | 4100 | nan | 1.0 |
| 0.0 | 17.95 | 4200 | nan | 1.0 |
| 0.0 | 18.38 | 4300 | nan | 1.0 |
| 0.0 | 18.8 | 4400 | nan | 1.0 |
| 0.0 | 19.23 | 4500 | nan | 1.0 |
| 0.0 | 19.66 | 4600 | nan | 1.0 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "facebook/wav2vec2-base-960h", "model-index": [{"name": "wav2vecvanilla_load_best", "results": []}]}
|
charris/wav2vecvanilla_load_best
| null |
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base-960h",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T19:52:11+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-base-960h #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vecvanilla\_load\_best
==========================
This model is a fine-tuned version of facebook/wav2vec2-base-960h on an unknown dataset.
It achieves the following results on the evaluation set:
* best model at 1400 steps, training loss 0.871600, validation loss 0.835038, evaluation WER 0.324867
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-base-960h #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2328
- Accuracy: 0.9304
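Assuming the fine-tuned checkpoint is available under the repository id given in this card's metadata, a minimal inference sketch looks like this (repository availability and the example sentence are assumptions):

```python
from transformers import pipeline

# Repository id taken from the card metadata; whether the weights are published is an assumption.
classifier = pipeline("text-classification", model="Mou11209203/my_awesome_model")
print(classifier("This was a surprisingly enjoyable movie."))
```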
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2242 | 1.0 | 1563 | 0.2034 | 0.9219 |
| 0.1456 | 2.0 | 3126 | 0.2328 | 0.9304 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "my_awesome_model", "results": []}]}
|
Mou11209203/my_awesome_model
| null |
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T19:55:29+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
my\_awesome\_model
==================
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2328
* Accuracy: 0.9304
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0_dataup_noreplacerej_40g_bs2_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
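A rough sketch of how this DPO stage might be wired up with `trl` under the settings above follows. The `beta` value, dataset split name, and preprocessing step are assumptions and are not taken from the original alignment-handbook recipe; trl versions contemporary with this card accept the `tokenizer` argument shown here.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "HuggingFaceH4/mistral-7b-sft-beta"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Split name is an assumption; in practice the prompt/chosen/rejected fields are
# first rendered with the chat template before being handed to the trainer.
train_dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

args = TrainingArguments(
    output_dir="0.0_dataup_noreplacerej_40g_bs2_iter_1",
    learning_rate=5e-7,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,   # a frozen reference copy is created internally when None
    beta=0.1,         # assumed value; not stated in this card
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```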
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.0_dataup_noreplacerej_40g_bs2_iter_1", "results": []}]}
|
ZhangShenao/0.0_dataup_noreplacerej_40g_bs2_iter_1
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T19:58:19+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.0_dataup_noreplacerej_40g_bs2_iter_1
This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
[
"# 0.0_dataup_noreplacerej_40g_bs2_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 128\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.0_dataup_noreplacerej_40g_bs2_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 128\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
null | null |
GGUF-based Quantized models from [fireworks-ai/mixtral-8x22b-instruct-oh](https://huggingface.co/fireworks-ai/mixtral-8x22b-instruct-oh)
The rest of this README is a template derived from TheBloke's README
<!-- description start -->
## Description
This repo contains GGUF format model files for [fireworks-ai/mixtral-8x22b-instruct-oh](https://huggingface.co/fireworks-ai/mixtral-8x22b-instruct-oh).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- prompt-template start -->
The model uses Vicuna chat template:
```
USER: ...
ASSISTANT: ...
```
or:
```
SYSTEM: ...
USER: ...
ASSISTANT: ...
```
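A tiny helper for assembling this prompt layout programmatically might look like the sketch below; it is purely illustrative, since the model only expects the plain text shown above.

```python
def build_vicuna_prompt(user_message: str, system_message: str = "") -> str:
    """Assemble a Vicuna-style prompt string, leaving the assistant turn open."""
    parts = []
    if system_message:
        parts.append(f"SYSTEM: {system_message}")
    parts.append(f"USER: {user_message}")
    parts.append("ASSISTANT:")
    return "\n".join(parts)

print(build_vicuna_prompt("Create a travel itinerary for the Bahamas.",
                          "You are a helpful assistant."))
```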
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from March 19th onwards, as of commit [d0d5de4](https://github.com/ggerganov/llama.cpp/commit/d0d5de42e5a65865b5fddb6f5c785083539b74c3)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0d5de4](https://github.com/ggerganov/llama.cpp/commit/d0d5de42e5a65865b5fddb6f5c785083539b74c3)
```shell
./main -ngl 35 -m mixtral-8x22b-v0.1-instruct-oh-Q8_0-00001-of-00004.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Note that we point llama.cpp at the 00001-of-XXXXX GGUF file only; a recent llama.cpp build will automatically load the remaining split parts.
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./mixtral-8x22b-v0.1-instruct-oh-Q8_0-00001-of-00004.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./mixtral-8x22b-v0.1-instruct-oh-Q8_0-00001-of-00004.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
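As a rough illustration of the llama-cpp-python route, a LangChain setup might look like the following sketch; the import path reflects langchain-community at the time of writing, and the parameters are assumptions mirroring the llama.cpp example earlier in this README.

```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./mixtral-8x22b-v0.1-instruct-oh-Q8_0-00001-of-00004.gguf",
    n_ctx=32768,      # sequence length, as in the llama.cpp example above
    n_gpu_layers=35,  # layers to offload to GPU; set to 0 for CPU-only
    temperature=0.7,
)

prompt = "SYSTEM: You are a helpful assistant.\nUSER: Summarise GGUF in one sentence.\nASSISTANT:"
print(llm.invoke(prompt))
```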
# Original Model Card for Fireworks Mixtral 8x22b Instruct OH
Fireworks Mixtral 8x22b Instruct OH is an instruct version of the latest MoE model from [mistralai](https://huggingface.co/mistralai) - [8x22b](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1). This
model was fine-tuned on ~10K entries from the [OpenHermes](https://huggingface.co/datasets/teknium/OpenHermes-2.5) dataset by [NousResearch](https://huggingface.co/NousResearch). A huge
shoutout to [Teknium](https://huggingface.co/teknium) and the NousResearch team for this high-quality SFT dataset.
Try out the model on the [Fireworks platform](https://fireworks.ai/models/fireworks/mixtral-8x22b-instruct-preview).
## How to Get Started with the Model
For trying out the model on a hosted platform go [here](https://fireworks.ai/models/fireworks/mixtral-8x22b-instruct-preview).
The model uses Vicuna chat template:
```
USER: ...
ASSISTANT: ...
```
or:
```
SYSTEM: ...
USER: ...
ASSISTANT: ...
```
### Training Data
To save training time, we sample and format 10K samples from the [OpenHermes 2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) dataset, reformatting them to suit the Vicuna chat template. Apart from that, the dataset is used as is.
#### Training Setup
The model was trained on 8xA100 [80G] GPUs using DeepSpeed ZeRO 3. We had to fix an existing bug in DeepSpeed ZeRO 3 where the interaction between ZeRO 3 and MoE causes training to get stuck.
## Model Examination
<details Sample 1>
<summary>Itinerary Question</summary>
<code>
SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions.
USER: Create a travel itinerary for the bahamas. Spare no detail.
ASSISTANT: Sure, I'd be happy to help you plan a trip to the Bahamas! Here's a detailed itinerary for your consideration:
Day 1: Arrival in Nassau
- Arrive at Lynden Pindling International Airport in Nassau, the capital city of the Bahamas.
- Check into your hotel or resort.
- Spend the afternoon exploring the city, visiting places like the Queen's Staircase, Fort Fincastle, and the Straw Market.
- Enjoy a delicious Bahamian dinner at one of the local restaurants.
Day 2: Nassau City Tour and Atlantis Resort
- Start your day with a guided city tour of Nassau, visiting historical sites like the Government House, Parliament Square, and the Pirates of Nassau Museum.
- In the afternoon, head to the famous Atlantis Resort on Paradise Island. Explore the resort's attractions, such as the Aquaventure Water Park, the Marine Habitat, and the casino.
- End the day with a relaxing dinner at one of the resort's restaurants.
Day 3: Exuma Cays Day Trip
- Take a day trip to the Exuma Cays, a group of over 365 islands and cays.
- Visit popular spots like the Swimming Pigs at Big Major Cay, the Thunderball Grotto (featured in the James Bond movie "Thunderball"), and the stunning sandbars.
- Enjoy a picnic lunch on one of the pristine beaches.
- Return to Nassau in the evening.
Day 4: Blue Lagoon Island and Beach Day
- Spend the morning at Blue Lagoon Island, a private island just a short boat ride from Nassau.
- Enjoy activities like swimming, snorkeling, kayaking, or simply relaxing on the beach.
- In the afternoon, head back to Nassau and spend some time at one of the city's beautiful beaches, such as Cable Beach or Junkanoo Beach.
- Have dinner at a local seafood restaurant.
Day 5: Departure
- Depending on your flight schedule, you may have some time to explore Nassau further or do some last-minute shopping before heading to the airport for your departure.
I hope this itinerary helps you plan your trip to the Bahamas! Let me know if you have any other questions or need further assistance.
</code>
</details>
<details Sample 2>
<summary>QnA Question</summary>
<code>
SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions.
USER: What is the medical term for strong headache on one side of the head?
ASSISTANT: The medical term for a strong headache on one side of the head is called a migraine.
</code>
</details>
<details Sample 3>
<summary>Knowledge Question</summary>
<code>
SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions.
USER: How can I get stock price of stripe?
ASSISTANT: Stripe is a private company and does not have publicly traded stock. Therefore, you cannot get the stock price for Stripe.
</code>
</details>
|
{"language": ["en"], "license": "apache-2.0", "datasets": ["teknium/OpenHermes-2.5"], "base_model": "mistral-community/Mixtral-8x22B-v0.1"}
|
failspy/mixtral-8x22b-v0.1-instruct-oh-GGUF
| null |
[
"gguf",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:mistral-community/Mixtral-8x22B-v0.1",
"license:apache-2.0",
"region:us"
] | null |
2024-04-12T19:59:46+00:00
|
[] |
[
"en"
] |
TAGS
#gguf #en #dataset-teknium/OpenHermes-2.5 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #region-us
|
GGUF-based Quantized models from fireworks-ai/mixtral-8x22b-instruct-oh
Rest of this readme is a template URL derived from TheBloke's README
## Description
This repo contains GGUF format model files for fireworks-ai/mixtral-8x22b-instruct-oh.
### About GGUF
GGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* URL. The source project for GGUF. Offers a CLI and a server option.
* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
The model uses Vicuna chat template:
or:
## Compatibility
These quantised GGUFv2 files are compatible with URL from March 19th onwards, as of commit d0d5de4
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
## Example 'URL' command
Make sure you are using 'URL' from commit d0d5de4
Note that we are loading the GGUF of 00001-of-XXXXX. Using the latest URL will load the rest of the parts.
Change '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change '-c 32768' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the '-p <PROMPT>' argument with '-i -ins'
For other parameters and how to use them, please refer to the URL documentation
## How to run in 'text-generation-webui'
Further instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL.
## How to run from Python code
You can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: llama-cpp-python docs.
#### First install the package
Run one of the following commands, according to your system:
#### Simple llama-cpp-python example code
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* LangChain + llama-cpp-python
* LangChain + ctransformers
# Original Model Card for Fireworks Mixtral 8x22b Instruct OH
Fireworks Mixtral 8x22b Instruct OH is an instruct version of the latest MoE model from mistralai - 8x22b. This
model was fine-tuned on ~10K entries from the OpenHermes dataset by NousResearch. A huge
shoutout to Teknium and the NousResearch team for this high-quality SFT dataset.
Try out the model on the Fireworks platform.
## How to Get Started with the Model
For trying out the model on a hosted platform go here.
The model uses Vicuna chat template:
or:
### Training Data
To save training time, we sample and format 10K samples from the OpenHermes 2.5 dataset, reformatting them to suit the Vicuna chat template. Apart from that, the dataset is used as is.
#### Training Setup
The model was trained on 8xA100 [80G] GPUs using DeepSpeed ZeRO 3. We had to fix an existing bug in DeepSpeed ZeRO 3 where the interaction between ZeRO 3 and MoE causes training to get stuck.
## Model Examination
<details Sample 1>
<summary>Itinerary Question</summary>
<code>
SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions.
USER: Create a travel itinerary for the bahamas. Spare no detail.
ASSISTANT: Sure, I'd be happy to help you plan a trip to the Bahamas! Here's a detailed itinerary for your consideration:
Day 1: Arrival in Nassau
- Arrive at Lynden Pindling International Airport in Nassau, the capital city of the Bahamas.
- Check into your hotel or resort.
- Spend the afternoon exploring the city, visiting places like the Queen's Staircase, Fort Fincastle, and the Straw Market.
- Enjoy a delicious Bahamian dinner at one of the local restaurants.
Day 2: Nassau City Tour and Atlantis Resort
- Start your day with a guided city tour of Nassau, visiting historical sites like the Government House, Parliament Square, and the Pirates of Nassau Museum.
- In the afternoon, head to the famous Atlantis Resort on Paradise Island. Explore the resort's attractions, such as the Aquaventure Water Park, the Marine Habitat, and the casino.
- End the day with a relaxing dinner at one of the resort's restaurants.
Day 3: Exuma Cays Day Trip
- Take a day trip to the Exuma Cays, a group of over 365 islands and cays.
- Visit popular spots like the Swimming Pigs at Big Major Cay, the Thunderball Grotto (featured in the James Bond movie "Thunderball"), and the stunning sandbars.
- Enjoy a picnic lunch on one of the pristine beaches.
- Return to Nassau in the evening.
Day 4: Blue Lagoon Island and Beach Day
- Spend the morning at Blue Lagoon Island, a private island just a short boat ride from Nassau.
- Enjoy activities like swimming, snorkeling, kayaking, or simply relaxing on the beach.
- In the afternoon, head back to Nassau and spend some time at one of the city's beautiful beaches, such as Cable Beach or Junkanoo Beach.
- Have dinner at a local seafood restaurant.
Day 5: Departure
- Depending on your flight schedule, you may have some time to explore Nassau further or do some last-minute shopping before heading to the airport for your departure.
I hope this itinerary helps you plan your trip to the Bahamas! Let me know if you have any other questions or need further assistance.
</code>
</details>
<details Sample 2>
<summary>QnA Question</summary>
<code>
SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions.
USER: What is the medical term for strong headache on one side of the head?
ASSISTANT: The medical term for a strong headache on one side of the head is called a migraine.
</code>
</details>
<details Sample 3>
<summary>Knowledge Question</summary>
<code>
SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions.
USER: How can I get stock price of stripe?
ASSISTANT: Stripe is a private company and does not have publicly traded stock. Therefore, you cannot get the stock price for Stripe.
</code>
</details>
|
[
"## Description\n\nThis repo contains GGUF format model files for fireworks-ai/mixtral-8x22b-instruct-oh.",
"### About GGUF\n\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\n\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n\n* URL. The source project for GGUF. Offers a CLI and a server option.\n* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.\n* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.\n* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.\n* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.\n* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.\n* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.\n* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.\n* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.\n* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.\n\n\n\n\n\nThe model uses Vicuna chat template:\n\nor:",
"## Compatibility\n\nThese quantised GGUFv2 files are compatible with URL from March 19th onwards, as of commit d0d5de4\n\nThey are also compatible with many third party UIs and libraries - please see the list at the top of this README.",
"## Explanation of quantisation methods\n\n<details>\n <summary>Click to see details</summary>\n\nThe new methods available are:\n\n* GGML_TYPE_Q2_K - \"type-1\" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)\n* GGML_TYPE_Q3_K - \"type-0\" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.\n* GGML_TYPE_Q4_K - \"type-1\" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.\n* GGML_TYPE_Q5_K - \"type-1\" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw\n* GGML_TYPE_Q6_K - \"type-0\" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw\n\nRefer to the Provided Files table below to see what files use which methods, and how.\n</details>",
"## Example 'URL' command\n\nMake sure you are using 'URL' from commit d0d5de4\n\n\n\nNote that we are loading the GGUF of 00001-of-XXXXX. Using the latest URL will load the rest of the parts.\n\nChange '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.\n\nChange '-c 32768' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.\n\nIf you want to have a chat-style conversation, replace the '-p <PROMPT>' argument with '-i -ins'\n\nFor other parameters and how to use them, please refer to the URL documentation",
"## How to run in 'text-generation-webui'\n\nFurther instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL.",
"## How to run from Python code\n\nYou can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.",
"### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.",
"#### First install the package\n\nRun one of the following commands, according to your system:",
"#### Simple llama-cpp-python example code",
"## How to use with LangChain\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers",
"# Original Model Card for Fireworks Mixtral 8x22b Instruct OH\n\nFireworks Mixtral 8x22b Instruct OH is an instruct version of the latest MoE model from mistralai - 8x22b. This\nmodel was finetuned on ~10K entries from OpenHermes dataset by NousResearch. Huge\nshoutout to Teknium and the NousResearch team for this high-quality SFT dataset.\n\nTry out the model on the Fireworks platform.",
"## How to Get Started with the Model\n\nFor trying out the model on a hosted platform go here.\n\nThe model uses Vicuna chat template:\n\nor:",
"### Training Data\n\nTo save training time, we sample & format 10K samples from OpenHermes 2.5 dataset. We perform reformatting so as to \nsuit the vicuna chat template. Apart from that dataset is used as is.",
"#### Training Setup\n\nThe model was trained on 8xA100 [80G] using Deepspeed Zero 3. We had to fix an existing bug with deepspeed zero 3 where interaction between zero 3 & MoE causes\ntraining to get stuck.",
"## Model Examination\n\n<details Sample 1>\n <summary>Itinerary Question</summary>\n <code>\n SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions.\n \n USER: Create a travel itinerary for the bahamas. Spare no detail.\n \n ASSISTANT: Sure, I'd be happy to help you plan a trip to the Bahamas! Here's a detailed itinerary for your consideration:\n \n Day 1: Arrival in Nassau\n - Arrive at Lynden Pindling International Airport in Nassau, the capital city of the Bahamas.\n - Check into your hotel or resort. \n - Spend the afternoon exploring the city, visiting places like the Queen's Staircase, Fort Fincastle, and the Straw Market.\n - Enjoy a delicious Bahamian dinner at one of the local restaurants.\n \n Day 2: Nassau City Tour and Atlantis Resort \n - Start your day with a guided city tour of Nassau, visiting historical sites like the Government House, Parliament Square, and the Pirates of Nassau Museum.\n - In the afternoon, head to the famous Atlantis Resort on Paradise Island. Explore the resort's attractions, such as the Aquaventure Water Park, the Marine Habitat, and the casino.\n - End the day with a relaxing dinner at one of the resort's restaurants. \n \n Day 3: Exuma Cays Day Trip\n - Take a day trip to the Exuma Cays, a group of over 365 islands and cays. \n - Visit popular spots like the Swimming Pigs at Big Major Cay, the Thunderball Grotto (featured in the James Bond movie \"Thunderball\"), and the stunning sandbars.\n - Enjoy a picnic lunch on one of the pristine beaches.\n - Return to Nassau in the evening. \n \n Day 4: Blue Lagoon Island and Beach Day\n - Spend the morning at Blue Lagoon Island, a private island just a short boat ride from Nassau.\n - Enjoy activities like swimming, snorkeling, kayaking, or simply relaxing on the beach.\n - In the afternoon, head back to Nassau and spend some time at one of the city's beautiful beaches, such as Cable Beach or Junkanoo Beach.\n - Have dinner at a local seafood restaurant.\n \n Day 5: Departure \n - Depending on your flight schedule, you may have some time to explore Nassau further or do some last-minute shopping before heading to the airport for your departure.\n \n I hope this itinerary helps you plan your trip to the Bahamas! Let me know if you have any other questions or need further assistance.\n </code>\n</details>\n\n<details Sample 2>\n <summary>QnA Question</summary>\n <code>\n SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions.\n\n USER: What is the medical term for strong headache on one side of the head?\n\n ASSISTANT: The medical term for a strong headache on one side of the head is called a migraine.\n </code>\n</details>\n\n<details Sample 3>\n <summary>Knowledge Question</summary>\n <code>\n SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions.\n \n USER: How can I get stock price of stripe?\n \n ASSISTANT: Stripe is a private company and does not have publicly traded stock. Therefore, you cannot get the stock price for Stripe.\n </code>\n</details>"
] |
[
"TAGS\n#gguf #en #dataset-teknium/OpenHermes-2.5 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #region-us \n",
"## Description\n\nThis repo contains GGUF format model files for fireworks-ai/mixtral-8x22b-instruct-oh.",
"### About GGUF\n\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\n\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n\n* URL. The source project for GGUF. Offers a CLI and a server option.\n* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.\n* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.\n* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.\n* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.\n* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.\n* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.\n* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.\n* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.\n* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.\n\n\n\n\n\nThe model uses Vicuna chat template:\n\nor:",
"## Compatibility\n\nThese quantised GGUFv2 files are compatible with URL from March 19th onwards, as of commit d0d5de4\n\nThey are also compatible with many third party UIs and libraries - please see the list at the top of this README.",
"## Explanation of quantisation methods\n\n<details>\n <summary>Click to see details</summary>\n\nThe new methods available are:\n\n* GGML_TYPE_Q2_K - \"type-1\" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)\n* GGML_TYPE_Q3_K - \"type-0\" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.\n* GGML_TYPE_Q4_K - \"type-1\" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.\n* GGML_TYPE_Q5_K - \"type-1\" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw\n* GGML_TYPE_Q6_K - \"type-0\" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw\n\nRefer to the Provided Files table below to see what files use which methods, and how.\n</details>",
"## Example 'URL' command\n\nMake sure you are using 'URL' from commit d0d5de4\n\n\n\nNote that we are loading the GGUF of 00001-of-XXXXX. Using the latest URL will load the rest of the parts.\n\nChange '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.\n\nChange '-c 32768' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.\n\nIf you want to have a chat-style conversation, replace the '-p <PROMPT>' argument with '-i -ins'\n\nFor other parameters and how to use them, please refer to the URL documentation",
"## How to run in 'text-generation-webui'\n\nFurther instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL.",
"## How to run from Python code\n\nYou can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.",
"### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.",
"#### First install the package\n\nRun one of the following commands, according to your system:",
"#### Simple llama-cpp-python example code",
"## How to use with LangChain\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers",
"# Original Model Card for Fireworks Mixtral 8x22b Instruct OH\n\nFireworks Mixtral 8x22b Instruct OH is an instruct version of the latest MoE model from mistralai - 8x22b. This\nmodel was finetuned on ~10K entries from OpenHermes dataset by NousResearch. Huge\nshoutout to Teknium and the NousResearch team for this high-quality SFT dataset.\n\nTry out the model on the Fireworks platform.",
"## How to Get Started with the Model\n\nFor trying out the model on a hosted platform go here.\n\nThe model uses Vicuna chat template:\n\nor:",
"### Training Data\n\nTo save training time, we sample & format 10K samples from OpenHermes 2.5 dataset. We perform reformatting so as to \nsuit the vicuna chat template. Apart from that dataset is used as is.",
"#### Training Setup\n\nThe model was trained on 8xA100 [80G] using Deepspeed Zero 3. We had to fix an existing bug with deepspeed zero 3 where interaction between zero 3 & MoE causes\ntraining to get stuck.",
"## Model Examination\n\n<details Sample 1>\n <summary>Itinerary Question</summary>\n <code>\n SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions.\n \n USER: Create a travel itinerary for the bahamas. Spare no detail.\n \n ASSISTANT: Sure, I'd be happy to help you plan a trip to the Bahamas! Here's a detailed itinerary for your consideration:\n \n Day 1: Arrival in Nassau\n - Arrive at Lynden Pindling International Airport in Nassau, the capital city of the Bahamas.\n - Check into your hotel or resort. \n - Spend the afternoon exploring the city, visiting places like the Queen's Staircase, Fort Fincastle, and the Straw Market.\n - Enjoy a delicious Bahamian dinner at one of the local restaurants.\n \n Day 2: Nassau City Tour and Atlantis Resort \n - Start your day with a guided city tour of Nassau, visiting historical sites like the Government House, Parliament Square, and the Pirates of Nassau Museum.\n - In the afternoon, head to the famous Atlantis Resort on Paradise Island. Explore the resort's attractions, such as the Aquaventure Water Park, the Marine Habitat, and the casino.\n - End the day with a relaxing dinner at one of the resort's restaurants. \n \n Day 3: Exuma Cays Day Trip\n - Take a day trip to the Exuma Cays, a group of over 365 islands and cays. \n - Visit popular spots like the Swimming Pigs at Big Major Cay, the Thunderball Grotto (featured in the James Bond movie \"Thunderball\"), and the stunning sandbars.\n - Enjoy a picnic lunch on one of the pristine beaches.\n - Return to Nassau in the evening. \n \n Day 4: Blue Lagoon Island and Beach Day\n - Spend the morning at Blue Lagoon Island, a private island just a short boat ride from Nassau.\n - Enjoy activities like swimming, snorkeling, kayaking, or simply relaxing on the beach.\n - In the afternoon, head back to Nassau and spend some time at one of the city's beautiful beaches, such as Cable Beach or Junkanoo Beach.\n - Have dinner at a local seafood restaurant.\n \n Day 5: Departure \n - Depending on your flight schedule, you may have some time to explore Nassau further or do some last-minute shopping before heading to the airport for your departure.\n \n I hope this itinerary helps you plan your trip to the Bahamas! Let me know if you have any other questions or need further assistance.\n </code>\n</details>\n\n<details Sample 2>\n <summary>QnA Question</summary>\n <code>\n SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions.\n\n USER: What is the medical term for strong headache on one side of the head?\n\n ASSISTANT: The medical term for a strong headache on one side of the head is called a migraine.\n </code>\n</details>\n\n<details Sample 3>\n <summary>Knowledge Question</summary>\n <code>\n SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions.\n \n USER: How can I get stock price of stripe?\n \n ASSISTANT: Stripe is a private company and does not have publicly traded stock. Therefore, you cannot get the stock price for Stripe.\n </code>\n</details>"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
domenicrosati/beavertails_attack_meta-llama_Llama-2-7b-chat-hf_8e-5_1k
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T20:03:15+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# LaserRMT
Fine-tuned with LaserRMT on the top 5 layers
Original model from: [Unsloth](https://huggingface.co/unsloth/Starling-LM-7B-beta) with Self-Extend LLM Context Window.
The initial perplexity of the model is 12.196647644042969
Improved perplexity found: 11.843605995178223. Total modifications: 11
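As background, LaserRMT applies LASER-style layer-selective rank reduction: selected weight matrices are replaced by low-rank SVD approximations, with random-matrix-theory heuristics guiding which singular values to drop. A minimal, illustrative sketch of the underlying rank-reduction step (this is not the actual LaserRMT implementation, and the layer name in the comment is an assumption):

```python
import torch

def low_rank_reduce(weight: torch.Tensor, keep_ratio: float = 0.25) -> torch.Tensor:
    """Return a rank-reduced approximation of a weight matrix (LASER-style)."""
    U, S, Vh = torch.linalg.svd(weight.float(), full_matrices=False)
    k = max(1, int(keep_ratio * S.numel()))   # number of singular values to keep
    return (U[:, :k] * S[:k]) @ Vh[:k, :]     # rank-k reconstruction

# Hypothetical usage on one transformer layer's MLP projection:
# layer.mlp.down_proj.weight.data = low_rank_reduce(layer.mlp.down_proj.weight.data)
```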
# Starling-LM-7B-beta
<!-- Provide a quick summary of what the model is/does. -->
- **Developed by: The Nexusflow Team (** Banghua Zhu * , Evan Frick * , Tianhao Wu * , Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao).
- **Model type:** Language Model finetuned with RLHF / RLAIF
- **License:** Apache-2.0 license under the condition that the model is not used to compete with OpenAI
- **Finetuned from model:** [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) (based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1))
We introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) with our new reward model [Nexusflow/Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B) and policy optimization method [Fine-Tuning Language Models from Human Preferences (PPO)](https://arxiv.org/abs/1909.08593).
Harnessing the power of the ranking dataset, [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar), the upgraded reward model, [Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B), and the new reward training and policy tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 in MT Bench with GPT-4 as a judge.
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
**Important: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to reduce this behavior.**
Our model follows the exact chat template and usage as [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106). Please refer to their model card for more details.
In addition, our model is hosted on LMSYS [Chatbot Arena](https://chat.lmsys.org) for free testing.
The conversation template is the same as Openchat-3.5-0106:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106")
# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Coding Mode
tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
```
## Code Examples
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("Nexusflow/Starling-LM-7B-beta")
model = transformers.AutoModelForCausalLM.from_pretrained("Nexusflow/Starling-LM-7B-beta")
def generate_response(prompt):
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(
input_ids,
max_length=256,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
)
response_ids = outputs[0]
response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
return response_text
# Single-turn conversation
prompt = "Hello, how are you?"
single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(single_turn_prompt)
print("Response:", response_text)
## Multi-turn conversation
prompt = "Hello"
follow_up_question = "How are you today?"
response = ""
multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(multi_turn_prompt)
print("Multi-turn conversation response:", response_text)
### Coding conversation
prompt = "Implement quicksort using C++"
coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:"
response = generate_response(coding_prompt)
print("Coding conversation response:", response)
```
## License
The dataset, model and online demo are subject to the [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.
## Acknowledgment
We would like to thank Tianle Li from UC Berkeley for detailed feedback and evaluation of this beta release. We would like to thank the [LMSYS Organization](https://lmsys.org/) for their support of the [lmsys-chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) dataset, evaluation and online demo. We would like to thank the open source community for their efforts in providing the datasets and base models we used to develop the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan and ShareGPT.
## Citation
```
@misc{starling2023,
title = {Starling-7B: Improving LLM Helpfulness & Harmlessness with RLAIF},
url = {},
author = {Zhu, Banghua and Frick, Evan and Wu, Tianhao and Zhu, Hanlin and Ganesan, Karthik and Chiang, Wei-Lin and Zhang, Jian and Jiao, Jiantao},
month = {November},
year = {2023}
}
```
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["reward model", "RLHF", "RLAIF"], "datasets": ["berkeley-nest/Nectar"]}
|
ManniX-ITA/Starling-LM-7B-beta-LaserRMT-v1
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"reward model",
"RLHF",
"RLAIF",
"conversational",
"en",
"dataset:berkeley-nest/Nectar",
"arxiv:1909.08593",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T20:04:20+00:00
|
[
"1909.08593"
] |
[
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #reward model #RLHF #RLAIF #conversational #en #dataset-berkeley-nest/Nectar #arxiv-1909.08593 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# LaserRMT
Fine-tuned with LaserRMT on the top 5 layers
Original model from: Unsloth with Self-Extend LLM Context Window.
The initial perplexity of the model is 12.196647644042969
Improved perplexity found: 11.843605995178223. Total modifications: 11
# Starling-LM-7B-beta
- Developed by: The Nexusflow Team ( Banghua Zhu * , Evan Frick * , Tianhao Wu * , Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao).
- Model type: Language Model finetuned with RLHF / RLAIF
- License: Apache-2.0 license under the condition that the model is not used to compete with OpenAI
- Finetuned from model: Openchat-3.5-0106 (based on Mistral-7B-v0.1)
We introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from Openchat-3.5-0106 with our new reward model Nexusflow/Starling-RM-34B and policy optimization method Fine-Tuning Language Models from Human Preferences (PPO).
Harnessing the power of the ranking dataset, berkeley-nest/Nectar, the upgraded reward model, Starling-RM-34B, and the new reward training and policy tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 in MT Bench with GPT-4 as a judge.
## Uses
Important: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to reduce this behavior.
Our model follows the exact chat template and usage as Openchat-3.5-0106. Please refer to their model card for more details.
In addition, our model is hosted on LMSYS Chatbot Arena for free testing.
The conversation template is the same as Openchat-3.5-0106:
## Code Examples
## License
The dataset, model and online demo are subject to the Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Please contact us if you find any potential violation.
## Acknowledgment
We would like to thank Tianle Li from UC Berkeley for detailed feedback and evaluation of this beta release. We would like to thank the LMSYS Organization for their support of the lmsys-chat-1M dataset, evaluation and online demo. We would like to thank the open source community for their efforts in providing the datasets and base models we used to develop the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan and ShareGPT.
|
[
"# LaserRMT\n\nFine tuned with LaserRMT on the top 5 layers\n\nOriginal model from: Unsloth with Self-Extend LLM Context Window.\n\nThe initial perplexity of the model is 12.196647644042969\n\nImproved perplexity found: 11.843605995178223. Total modifications: 11",
"# Starling-LM-7B-beta\n\n\n\n- Developed by: The Nexusflow Team ( Banghua Zhu * , Evan Frick * , Tianhao Wu * , Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao).\n- Model type: Language Model finetuned with RLHF / RLAIF\n- License: Apache-2.0 license under the condition that the model is not used to compete with OpenAI\n- Finetuned from model: Openchat-3.5-0106 (based on Mistral-7B-v0.1)\n \n\n\nWe introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from Openchat-3.5-0106 with our new reward model Nexusflow/Starling-RM-34B and policy optimization method Fine-Tuning Language Models from Human Preferences (PPO).\nHarnessing the power of the ranking dataset, berkeley-nest/Nectar, the upgraded reward model, Starling-RM-34B, and the new reward training and policy tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 in MT Bench with GPT-4 as a judge.",
"## Uses\n\n\n\nImportant: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.\n\nOur model follows the exact chat template and usage as Openchat-3.5-0106. Please refer to their model card for more details.\nIn addition, our model is hosted on LMSYS Chatbot Arena for free test.\n\nThe conversation template is the same as Openchat-3.5-0106:",
"## Code Examples",
"## License\nThe dataset, model and online demo is subject to the Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Please contact us if you find any potential violation.",
"## Acknowledgment\nWe would like to thank Tianle Li from UC Berkeley for detailed feedback and evaluation of this beta release. We would like to thank the LMSYS Organization for their support of lmsys-chat-1M dataset, evaluation and online demo. We would like to thank the open source community for their efforts in providing the datasets and base models we used to develope the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan and ShareGPT."
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #reward model #RLHF #RLAIF #conversational #en #dataset-berkeley-nest/Nectar #arxiv-1909.08593 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# LaserRMT\n\nFine tuned with LaserRMT on the top 5 layers\n\nOriginal model from: Unsloth with Self-Extend LLM Context Window.\n\nThe initial perplexity of the model is 12.196647644042969\n\nImproved perplexity found: 11.843605995178223. Total modifications: 11",
"# Starling-LM-7B-beta\n\n\n\n- Developed by: The Nexusflow Team ( Banghua Zhu * , Evan Frick * , Tianhao Wu * , Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao).\n- Model type: Language Model finetuned with RLHF / RLAIF\n- License: Apache-2.0 license under the condition that the model is not used to compete with OpenAI\n- Finetuned from model: Openchat-3.5-0106 (based on Mistral-7B-v0.1)\n \n\n\nWe introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from Openchat-3.5-0106 with our new reward model Nexusflow/Starling-RM-34B and policy optimization method Fine-Tuning Language Models from Human Preferences (PPO).\nHarnessing the power of the ranking dataset, berkeley-nest/Nectar, the upgraded reward model, Starling-RM-34B, and the new reward training and policy tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 in MT Bench with GPT-4 as a judge.",
"## Uses\n\n\n\nImportant: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.\n\nOur model follows the exact chat template and usage as Openchat-3.5-0106. Please refer to their model card for more details.\nIn addition, our model is hosted on LMSYS Chatbot Arena for free test.\n\nThe conversation template is the same as Openchat-3.5-0106:",
"## Code Examples",
"## License\nThe dataset, model and online demo is subject to the Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Please contact us if you find any potential violation.",
"## Acknowledgment\nWe would like to thank Tianle Li from UC Berkeley for detailed feedback and evaluation of this beta release. We would like to thank the LMSYS Organization for their support of lmsys-chat-1M dataset, evaluation and online demo. We would like to thank the open source community for their efforts in providing the datasets and base models we used to develope the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan and ShareGPT."
] |
reinforcement-learning
| null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
import gym  # use `import gymnasium as gym` on newer setups
# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="spietari/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
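A minimal greedy-evaluation sketch (assuming the classic Gym `reset`/`step` API and that the downloaded dict exposes a `qtable` entry, as in the course's standard format):

```python
import numpy as np

n_eval_episodes = 10
total_reward = 0.0
for _ in range(n_eval_episodes):
    state = env.reset()          # classic Gym API; Gymnasium returns (obs, info)
    done = False
    while not done:
        action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
        state, reward, done, info = env.step(action)     # Gymnasium returns a 5-tuple instead
        total_reward += reward
print("Mean reward:", total_reward / n_eval_episodes)
```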
|
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
|
spietari/q-FrozenLake-v1-4x4-noSlippery
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null |
2024-04-12T20:05:21+00:00
|
[] |
[] |
TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1.
## Usage
model = load_from_hub(repo_id="spietari/q-FrozenLake-v1-4x4-noSlippery", filename="URL")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = URL(model["env_id"])
|
[
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage\n\n model = load_from_hub(repo_id=\"spietari/q-FrozenLake-v1-4x4-noSlippery\", filename=\"URL\")\n\n # Don't forget to check if you need to add additional attributes (is_slippery=False etc)\n env = URL(model[\"env_id\"])"
] |
[
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage\n\n model = load_from_hub(repo_id=\"spietari/q-FrozenLake-v1-4x4-noSlippery\", filename=\"URL\")\n\n # Don't forget to check if you need to add additional attributes (is_slippery=False etc)\n env = URL(model[\"env_id\"])"
] |
token-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Luciaaaaa/dummy-model
| null |
[
"transformers",
"safetensors",
"gpt2",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T20:05:58+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #gpt2 #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #gpt2 #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
OwOpeepeepoopoo/ummm6
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T20:13:28+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
adapter-transformers
|
# Adapter `BigTMiami/B_adapter_seq_bn_classification_P_10_to_C_10` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/B_adapter_seq_bn_classification_P_10_to_C_10", source="hf", set_active=True)
```
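After loading, the classification head can be used for inference. A minimal sketch (the tokenizer choice and example input are assumptions, not part of this card):

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("This review was really helpful to me.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # logits from the loaded classification head
predicted_class = int(torch.argmax(logits, dim=-1))
print(predicted_class)
```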
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
{"tags": ["adapter-transformers", "roberta"], "datasets": ["BigTMiami/amazon_helpfulness"]}
|
BigTMiami/B_adapter_seq_bn_classification_P_10_to_C_10
| null |
[
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_helpfulness",
"region:us"
] | null |
2024-04-12T20:13:50+00:00
|
[] |
[] |
TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us
|
# Adapter 'BigTMiami/B_adapter_seq_bn_classification_P_10_to_C_10' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
|
[
"# Adapter 'BigTMiami/B_adapter_seq_bn_classification_P_10_to_C_10' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
[
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us \n",
"# Adapter 'BigTMiami/B_adapter_seq_bn_classification_P_10_to_C_10' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
null |
transformers
|
# Uploaded model
- **Developed by:** raosharjeel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
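A minimal inference sketch with Unsloth (assuming the standard `FastLanguageModel` API; sequence length and generation parameters are illustrative):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="raosharjeel/tonysMatrixMistral",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```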
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"}
|
raosharjeel/tonysMatrixMistral
| null |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T20:17:34+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: raosharjeel
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: raosharjeel\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: raosharjeel\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | null |
GGUF quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Yi-6B - GGUF
- Model creator: https://huggingface.co/01-ai/
- Original model: https://huggingface.co/01-ai/Yi-6B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Yi-6B.Q2_K.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.Q2_K.gguf) | Q2_K | 2.18GB |
| [Yi-6B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.IQ3_XS.gguf) | IQ3_XS | 2.41GB |
| [Yi-6B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.IQ3_S.gguf) | IQ3_S | 2.53GB |
| [Yi-6B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.Q3_K_S.gguf) | Q3_K_S | 2.52GB |
| [Yi-6B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.IQ3_M.gguf) | IQ3_M | 2.62GB |
| [Yi-6B.Q3_K.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.Q3_K.gguf) | Q3_K | 2.79GB |
| [Yi-6B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.Q3_K_M.gguf) | Q3_K_M | 2.79GB |
| [Yi-6B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.Q3_K_L.gguf) | Q3_K_L | 3.01GB |
| [Yi-6B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.IQ4_XS.gguf) | IQ4_XS | 3.11GB |
| [Yi-6B.Q4_0.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.Q4_0.gguf) | Q4_0 | 3.24GB |
| [Yi-6B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.IQ4_NL.gguf) | IQ4_NL | 3.27GB |
| [Yi-6B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.Q4_K_S.gguf) | Q4_K_S | 3.26GB |
| [Yi-6B.Q4_K.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.Q4_K.gguf) | Q4_K | 3.42GB |
| [Yi-6B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.Q4_K_M.gguf) | Q4_K_M | 3.42GB |
| [Yi-6B.Q4_1.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.Q4_1.gguf) | Q4_1 | 3.58GB |
| [Yi-6B.Q5_0.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.Q5_0.gguf) | Q5_0 | 3.92GB |
| [Yi-6B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.Q5_K_S.gguf) | Q5_K_S | 3.92GB |
| [Yi-6B.Q5_K.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.Q5_K.gguf) | Q5_K | 4.01GB |
| [Yi-6B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.Q5_K_M.gguf) | Q5_K_M | 4.01GB |
| [Yi-6B.Q5_1.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.Q5_1.gguf) | Q5_1 | 4.25GB |
| [Yi-6B.Q6_K.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-gguf/blob/main/Yi-6B.Q6_K.gguf) | Q6_K | 4.63GB |
Original model description:
---
license: other
license_name: yi-license
license_link: LICENSE
widget:
- example_title: "Yi-34B-Chat"
text: "hi"
output:
text: " Hello! How can I assist you today?"
- example_title: "Yi-34B"
text: "There's a place where time stands still. A place of breath taking wonder, but also"
output:
text: " an eerie sense that something is just not right…\nBetween the two worlds lies The Forgotten Kingdom - home to creatures long since thought extinct and ancient magic so strong it defies belief! Only here can you find what has been lost for centuries: An Elixir Of Life which will restore youth and vitality if only those who seek its power are brave enough to face up against all manner of dangers lurking in this mysterious land! But beware; some say there may even exist powerful entities beyond our comprehension whose intentions towards humanity remain unclear at best ---- they might want nothing more than destruction itself rather then anything else from their quest after immortality (and maybe someone should tell them about modern medicine)? In any event though – one thing remains true regardless : whether or not success comes easy depends entirely upon how much effort we put into conquering whatever challenges lie ahead along with having faith deep down inside ourselves too ;) So let’s get started now shall We?"
pipeline_tag: text-generation
---
<div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px">
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px">
<img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg">
</picture>
</br>
</br>
<div style="display: inline-block;">
<a href="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml">
<img src="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml/badge.svg">
</a>
</div>
<div style="display: inline-block;">
<a href="https://github.com/01-ai/Yi/blob/main/LICENSE">
<img src="https://img.shields.io/badge/Code_License-Apache_2.0-lightblue">
</a>
</div>
<div style="display: inline-block;">
<a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">
<img src="https://img.shields.io/badge/Model_License-Yi_License-lightblue">
</a>
</div>
<div style="display: inline-block;">
<a href="mailto:[email protected]">
<img src="https://img.shields.io/badge/✉️[email protected]">
</a>
</div>
</div>
<div align="center">
<h3 align="center">Building the Next Generation of Open-Source and Bilingual LLMs</h3>
</div>
<p align="center">
🤗 <a href="https://huggingface.co/01-ai" target="_blank">Hugging Face</a> • 🤖 <a href="https://www.modelscope.cn/organization/01ai/" target="_blank">ModelScope</a> • ✡️ <a href="https://wisemodel.cn/organization/01.AI" target="_blank">WiseModel</a>
</p>
<p align="center">
👩🚀 Ask questions or discuss ideas on <a href="https://github.com/01-ai/Yi/discussions" target="_blank"> GitHub </a>
</p>
<p align="center">
👋 Join us on <a href="https://discord.gg/hYUwWddeAu" target="_blank"> 👾 Discord </a> or <a href="https://github.com/01-ai/Yi/issues/43" target="_blank"> 💬 WeChat </a>
</p>
<p align="center">
📝 Check out <a href="https://arxiv.org/abs/2403.04652"> Yi Tech Report </a>
</p>
<p align="center">
📚 Grow at <a href="#learning-hub"> Yi Learning Hub </a>
</p>
<!-- DO NOT REMOVE ME -->
<hr>
<details open>
<summary><b>📕 Table of Contents</b></summary>
- [What is Yi?](#what-is-yi)
- [Introduction](#introduction)
- [Models](#models)
- [Chat models](#chat-models)
- [Base models](#base-models)
- [Model info](#model-info)
- [News](#news)
- [How to use Yi?](#how-to-use-yi)
- [Quick start](#quick-start)
- [Choose your path](#choose-your-path)
- [pip](#quick-start---pip)
- [docker](#quick-start---docker)
- [llama.cpp](#quick-start---llamacpp)
- [conda-lock](#quick-start---conda-lock)
- [Web demo](#web-demo)
- [Fine-tuning](#fine-tuning)
- [Quantization](#quantization)
- [Deployment](#deployment)
- [Learning hub](#learning-hub)
- [Why Yi?](#why-yi)
- [Ecosystem](#ecosystem)
- [Upstream](#upstream)
- [Downstream](#downstream)
- [Serving](#serving)
- [Quantization](#quantization-1)
- [Fine-tuning](#fine-tuning-1)
- [API](#api)
- [Benchmarks](#benchmarks)
- [Base model performance](#base-model-performance)
- [Chat model performance](#chat-model-performance)
- [Tech report](#tech-report)
- [Citation](#citation)
- [Who can use Yi?](#who-can-use-yi)
- [Misc.](#misc)
- [Acknowledgements](#acknowledgments)
- [Disclaimer](#disclaimer)
- [License](#license)
</details>
<hr>
# What is Yi?
## Introduction
- 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by [01.AI](https://01.ai/).
- 🙌 Targeted as a bilingual language model and trained on a 3T multilingual corpus, the Yi series models rank among the strongest LLMs worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example,
- Yi-34B-Chat model **landed in second place (following GPT-4 Turbo)**, outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024).
- Yi-34B model **ranked first among all existing open-source models** (such as Falcon-180B, Llama-70B, Claude) in **both English and Chinese** on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023).
- 🙏 (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem.
<details style="display: inline;"><summary> If you're interested in Yi's adoption of Llama architecture and license usage policy, see <span style="color: green;">Yi's relation with Llama.</span> ⬇️</summary> <ul> <br>
> 💡 TL;DR
>
> The Yi series models adopt the same model architecture as Llama but are **NOT** derivatives of Llama.
- Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018.
- Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi.
- Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems.
- However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights.
  - As Llama's structure is employed by the majority of open-source models, the key factors determining model performance are training datasets, training pipelines, and training infrastructure.
- Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. This effort has led to excellent performance with Yi series models ranking just behind GPT4 and surpassing Llama on the [Alpaca Leaderboard in Dec 2023](https://tatsu-lab.github.io/alpaca_eval/).
</ul>
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
## News
<details>
<summary>🎯 <b>2024-03-16</b>: The <code>Yi-9B-200K</code> is open-sourced and available to the public.</summary>
</details>
<details open>
<summary>🎯 <b>2024-03-08</b>: <a href="https://arxiv.org/abs/2403.04652">Yi Tech Report</a> is published! </summary>
</details>
<details open>
<summary>🔔 <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary>
<br>
In the "Needle-in-a-Haystack" test, the Yi-34B-200K's performance is improved by 10.5%, rising from 89.3% to an impressive 99.8%. We continue to pre-train the model on 5B tokens long-context data mixture and demonstrate a near-all-green performance.
</details>
<details open>
<summary>🎯 <b>2024-03-06</b>: The <code>Yi-9B</code> is open-sourced and available to the public.</summary>
<br>
<code>Yi-9B</code> stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.
</details>
<details open>
<summary>🎯 <b>2024-01-23</b>: The Yi-VL models, <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> and <code><a href="https://huggingface.co/01-ai/Yi-VL-6B">Yi-VL-6B</a></code>, are open-sourced and available to the public.</summary>
<br>
<code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> has ranked <strong>first</strong> among all existing open-source models in the latest benchmarks, including <a href="https://arxiv.org/abs/2311.16502">MMMU</a> and <a href="https://arxiv.org/abs/2401.11944">CMMMU</a> (based on data available up to January 2024).</li>
</details>
<details>
<summary>🎯 <b>2023-11-23</b>: <a href="#chat-models">Chat models</a> are open-sourced and available to the public.</summary>
<br>This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ.
- `Yi-34B-Chat`
- `Yi-34B-Chat-4bits`
- `Yi-34B-Chat-8bits`
- `Yi-6B-Chat`
- `Yi-6B-Chat-4bits`
- `Yi-6B-Chat-8bits`
You can try some of them interactively at:
- [Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat)
- [Replicate](https://replicate.com/01-ai)
</details>
<details>
<summary>🔔 <b>2023-11-23</b>: The Yi Series Models Community License Agreement is updated to <a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">v2.1</a>.</summary>
</details>
<details>
<summary>🔥 <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary>
<br>Application form:
- [English](https://cn.mikecrm.com/l91ODJf)
- [Chinese](https://cn.mikecrm.com/gnEZjiQ)
</details>
<details>
<summary>🎯 <b>2023-11-05</b>: <a href="#base-models">The base models, </a><code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>, are open-sourced and available to the public.</summary>
<br>This release contains two base models with the same parameter sizes as the previous
release, except that the context window is extended to 200K.
</details>
<details>
<summary>🎯 <b>2023-11-02</b>: <a href="#base-models">The base models, </a><code>Yi-6B</code> and <code>Yi-34B</code>, are open-sourced and available to the public.</summary>
<br>The first public release contains two bilingual (English/Chinese) base models
with the parameter sizes of 6B and 34B. Both of them are trained with 4K
sequence length and can be extended to 32K during inference time.
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
## Models
Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements.
If you want to deploy Yi models, make sure you meet the [software and hardware requirements](#deployment).
### Chat models
| Model | Download
|---|---
Yi-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat/summary)
Yi-34B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-4bits/summary)
Yi-34B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-8bits/summary)
Yi-6B-Chat| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat/summary)
Yi-6B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-4bits/summary)
Yi-6B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-8bits/summary)
<sub><sup> - 4-bit series models are quantized by AWQ. <br> - 8-bit series models are quantized by GPTQ <br> - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090). </sup></sub>
### Base models
| Model | Download |
|---|---|
Yi-34B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B/summary)
Yi-34B-200K|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-200K/summary)
Yi-9B|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-9B)
Yi-9B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B-200K) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-9B-200K)
Yi-6B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B/summary)
Yi-6B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-200K/summary)
<sub><sup> - 200k is roughly equivalent to 400,000 Chinese characters. <br> - If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run `git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf` to download the weight. </sup></sub>
### Model info
- For chat and base models
<table>
<thead>
<tr>
<th>Model</th>
<th>Intro</th>
<th>Default context window</th>
<th>Pretrained tokens</th>
<th>Training Data Date</th>
</tr>
</thead>
<tbody><tr>
<td>6B series models</td>
<td>They are suitable for personal and academic use.</td>
<td rowspan="3">4K</td>
<td>3T</td>
<td rowspan="3">Up to June 2023</td>
</tr>
<tr>
<td>9B series models</td>
<td>It is the best at coding and math in the Yi series models.</td>
<td>Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens.</td>
</tr>
<tr>
<td>34B series models</td>
<td>They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. It's a cost-effective solution that's affordable and equipped with emergent ability.</td>
<td>3T</td>
</tr>
</tbody></table>
- For chat models
<details style="display: inline;"><summary>For chat model limitations, see the explanations below. ⬇️</summary>
<ul>
<br>The released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training.
<br>However, this higher diversity might amplify certain existing issues, including:
<li>Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucinations that are not based on accurate data or logical reasoning.</li>
<li>Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. The increased diversity can lead to varying results even under similar input conditions.</li>
<li>Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.</li>
<li>To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top_p, or top_k. These adjustments can help balance creativity and coherence in the model's outputs; a minimal example sketch is shown right after this section.</li>
</ul>
</details>
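As a concrete illustration, here is a minimal sketch (not part of the official Yi examples) of passing such parameters to `model.generate` through a `GenerationConfig`; the model path and the specific values are placeholders to tune for your own use case.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_path = '<your-model-path>'  # placeholder, as in the quick-start example below
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype="auto").eval()

# Lower temperature and a tighter top_p/top_k trade diversity for consistency.
gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.6,        # lower -> more deterministic
    top_p=0.8,              # nucleus sampling cutoff
    top_k=40,               # sample only from the 40 most likely tokens
    repetition_penalty=1.1, # mildly discourage repeated text
    max_new_tokens=256,
)

messages = [{"role": "user", "content": "hi"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids.to(model.device), generation_config=gen_config)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```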
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
# How to use Yi?
- [Quick start](#quick-start)
- [Choose your path](#choose-your-path)
- [pip](#quick-start---pip)
- [docker](#quick-start---docker)
- [conda-lock](#quick-start---conda-lock)
- [llama.cpp](#quick-start---llamacpp)
- [Web demo](#web-demo)
- [Fine-tuning](#fine-tuning)
- [Quantization](#quantization)
- [Deployment](#deployment)
- [Learning hub](#learning-hub)
## Quick start
Getting up and running with Yi models is simple with multiple choices available.
### Choose your path
Select one of the following paths to begin your journey with Yi!

#### 🎯 Deploy Yi locally
If you prefer to deploy Yi models locally,
- 🙋♀️ and you have **sufficient** resources (for example, NVIDIA A800 80GB), you can choose one of the following methods:
- [pip](#quick-start---pip)
- [Docker](#quick-start---docker)
- [conda-lock](#quick-start---conda-lock)
- 🙋♀️ and you have **limited** resources (for example, a MacBook Pro), you can use [llama.cpp](#quick-start---llamacpp).
#### 🎯 Not to deploy Yi locally
If you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options.
##### 🙋♀️ Run Yi with APIs
If you want to explore more features of Yi, you can adopt one of these methods:
- Yi APIs (Yi official)
- [Early access has been granted](https://x.com/01AI_Yi/status/1735728934560600536?s=20) to some applicants. Stay tuned for the next round of access!
- [Yi APIs](https://replicate.com/01-ai/yi-34b-chat/api?tab=nodejs) (Replicate)
##### 🙋♀️ Run Yi in playground
If you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options:
- [Yi-34B-Chat-Playground](https://platform.lingyiwanwu.com/prompt/playground) (Yi official)
- Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)).
- [Yi-34B-Chat-Playground](https://replicate.com/01-ai/yi-34b-chat) (Replicate)
##### 🙋♀️ Chat with Yi
If you want to chat with Yi, you can use one of these online services, which offer a similar user experience:
- [Yi-34B-Chat](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) (Yi official on Hugging Face)
- No registration is required.
- [Yi-34B-Chat](https://platform.lingyiwanwu.com/) (Yi official beta)
- Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)).
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Quick start - pip
This tutorial guides you through every step of running **Yi-34B-Chat locally on an A800 (80G)** and then performing inference.
#### Step 0: Prerequisites
- Make sure Python 3.10 or a later version is installed.
- If you want to run other Yi models, see [software and hardware requirements](#deployment).
#### Step 1: Prepare your environment
To set up the environment and install the required packages, execute the following command.
```bash
git clone https://github.com/01-ai/Yi.git
cd yi
pip install -r requirements.txt
```
#### Step 2: Download the Yi model
You can download the weights and tokenizer of Yi models from the following sources:
- [Hugging Face](https://huggingface.co/01-ai)
- [ModelScope](https://www.modelscope.cn/organization/01ai/)
- [WiseModel](https://wisemodel.cn/organization/01.AI)
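For instance, the repositories above can also be fetched programmatically; the following is a minimal sketch using `huggingface_hub` (the repository id and local directory are example values):

```python
from huggingface_hub import snapshot_download

# Download all weight and tokenizer files of a Yi model into a local folder.
local_dir = snapshot_download(
    repo_id="01-ai/Yi-6B",        # any Yi model repository id works here
    local_dir="./models/Yi-6B",   # example target path
)
print(f"Model files downloaded to: {local_dir}")
```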
#### Step 3: Perform inference
You can perform inference with Yi chat or base models as below.
##### Perform inference with Yi chat model
1. Create a file named `quick_start.py` and copy the following content to it.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = '<your-model-path>'
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
# Since transformers 4.35.0, the GPT-Q/AWQ model can be loaded using AutoModelForCausalLM.
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
2. Run `quick_start.py`.
```bash
python quick_start.py
```
Then you can see an output similar to the one below. 🥳
```bash
Hello! How can I assist you today?
```
##### Perform inference with Yi base model
- Yi-34B
The steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model).
You can use the existing file [`text_generation.py`](https://github.com/01-ai/Yi/tree/main/demo).
```bash
python demo/text_generation.py --model <your-model-path>
```
Then you can see an output similar to the one below. 🥳
<details>
<summary>Output. ⬇️ </summary>
<br>
**Prompt**: Let me tell you an interesting story about cat Tom and mouse Jerry,
**Generation**: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up...
</details>
- Yi-9B
Input
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_DIR = "01-ai/Yi-9B"
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, use_fast=False)
input_text = "# write the quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Output
```python
# write the quick sort algorithm
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
# test the quick sort algorithm
print(quick_sort([3, 6, 8, 10, 1, 2, 1]))
```
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Quick start - Docker
<details>
<summary> Run Yi-34B-chat locally with Docker: a step-by-step guide. ⬇️</summary>
<br>This tutorial guides you through every step of running <strong>Yi-34B-Chat on an A800 GPU</strong> or <strong>4*4090</strong> locally and then performing inference.
<h4>Step 0: Prerequisites</h4>
<p>Make sure you've installed <a href="https://docs.docker.com/engine/install/?open_in_browser=true">Docker</a> and <a href="https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html">nvidia-container-toolkit</a>.</p>
<h4> Step 1: Start Docker </h4>
<pre><code>docker run -it --gpus all \
    -v <your-model-path>:/models \
    ghcr.io/01-ai/yi:latest
</code></pre>
<p>Alternatively, you can pull the Yi Docker image from <code>registry.lingyiwanwu.com/ci/01-ai/yi:latest</code>.</p>
<h4>Step 2: Perform inference</h4>
<p>You can perform inference with Yi chat or base models as below.</p>
<h5>Perform inference with Yi chat model</h5>
<p>The steps are similar to <a href="#perform-inference-with-yi-chat-model">pip - Perform inference with Yi chat model</a>.</p>
<p><strong>Note</strong> that the only difference is to set <code>model_path = '<your-model-mount-path>'</code> instead of <code>model_path = '<your-model-path>'</code>.</p>
<h5>Perform inference with Yi base model</h5>
<p>The steps are similar to <a href="#perform-inference-with-yi-base-model">pip - Perform inference with Yi base model</a>.</p>
<p><strong>Note</strong> that the only difference is to set <code>--model <your-model-mount-path></code> instead of <code>--model <your-model-path></code>.</p>
</details>
### Quick start - conda-lock
<details>
<summary>You can use <code><a href="https://github.com/conda/conda-lock">conda-lock</a></code> to generate fully reproducible lock files for conda environments. ⬇️</summary>
<br>
You can refer to <a href="https://github.com/01-ai/Yi/blob/ebba23451d780f35e74a780987ad377553134f68/conda-lock.yml">conda-lock.yml</a> for the exact versions of the dependencies. Additionally, you can utilize <code><a href="https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html">micromamba</a></code> for installing these dependencies.
<br>
To install the dependencies, follow these steps:
1. Install micromamba by following the instructions available <a href="https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html">here</a>.
2. Execute <code>micromamba install -y -n yi -f conda-lock.yml</code> to create a conda environment named <code>yi</code> and install the necessary dependencies.
</details>
### Quick start - llama.cpp
<details>
<summary> Run Yi-chat-6B-2bits locally with llama.cpp: a step-by-step guide. ⬇️</summary>
<br>This tutorial guides you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference.</p>
- [Step 0: Prerequisites](#step-0-prerequisites)
- [Step 1: Download llama.cpp](#step-1-download-llamacpp)
- [Step 2: Download Yi model](#step-2-download-yi-model)
- [Step 3: Perform inference](#step-3-perform-inference)
#### Step 0: Prerequisites
- This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip.
- Make sure [`git-lfs`](https://git-lfs.com/) is installed on your machine.
#### Step 1: Download `llama.cpp`
To clone the [`llama.cpp`](https://github.com/ggerganov/llama.cpp) repository, run the following command.
```bash
git clone [email protected]:ggerganov/llama.cpp.git
```
#### Step 2: Download Yi model
2.1 To clone [XeIaso/yi-chat-6B-GGUF](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main) with just pointers, run the following command.
```bash
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/XeIaso/yi-chat-6B-GGUF
```
2.2 To download a quantized Yi model ([yi-chat-6b.Q2_K.gguf](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/blob/main/yi-chat-6b.Q2_K.gguf)), run the following command.
```bash
git-lfs pull --include yi-chat-6b.Q2_K.gguf
```
#### Step 3: Perform inference
To perform inference with the Yi model, you can use one of the following methods.
- [Method 1: Perform inference in terminal](#method-1-perform-inference-in-terminal)
- [Method 2: Perform inference in web](#method-2-perform-inference-in-web)
##### Method 1: Perform inference in terminal
To compile `llama.cpp` using 4 threads and then conduct inference, navigate to the `llama.cpp` directory, and run the following command.
> ##### Tips
>
> - Replace `/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf` with the actual path of your model.
>
> - By default, the model operates in completion mode.
>
> - For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run `./main -h` to check detailed descriptions and usage.
```bash
make -j4 && ./main -m /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf -p "How do you feed your pet fox? Please answer this question in 6 simple steps:\nStep 1:" -n 384 -e
...
How do you feed your pet fox? Please answer this question in 6 simple steps:
Step 1: Select the appropriate food for your pet fox. You should choose high-quality, balanced prey items that are suitable for their unique dietary needs. These could include live or frozen mice, rats, pigeons, or other small mammals, as well as fresh fruits and vegetables.
Step 2: Feed your pet fox once or twice a day, depending on the species and its individual preferences. Always ensure that they have access to fresh water throughout the day.
Step 3: Provide an appropriate environment for your pet fox. Ensure it has a comfortable place to rest, plenty of space to move around, and opportunities to play and exercise.
Step 4: Socialize your pet with other animals if possible. Interactions with other creatures can help them develop social skills and prevent boredom or stress.
Step 5: Regularly check for signs of illness or discomfort in your fox. Be prepared to provide veterinary care as needed, especially for common issues such as parasites, dental health problems, or infections.
Step 6: Educate yourself about the needs of your pet fox and be aware of any potential risks or concerns that could affect their well-being. Regularly consult with a veterinarian to ensure you are providing the best care.
...
```
Now you have successfully asked a question to the Yi model and got an answer! 🥳
##### Method 2: Perform inference in web
1. To initialize a lightweight and swift chatbot, run the following command.
```bash
cd llama.cpp
./server --ctx-size 2048 --host 0.0.0.0 --n-gpu-layers 64 --model /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf
```
Then you can get an output like this:
```bash
...
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 5000000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2 Pro
ggml_metal_init: picking default device: Apple M2 Pro
ggml_metal_init: ggml.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/yu/llama.cpp/ggml-metal.metal'
ggml_metal_init: GPU name: Apple M2 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008)
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB
ggml_metal_init: maxTransferRate = built-in GPU
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 128.00 MiB, ( 2629.44 / 10922.67)
llama_new_context_with_model: KV self size = 128.00 MiB, K (f16): 64.00 MiB, V (f16): 64.00 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 0.02 MiB, ( 2629.45 / 10922.67)
llama_build_graph: non-view tensors processed: 676/676
llama_new_context_with_model: compute buffer total size = 159.19 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 156.02 MiB, ( 2785.45 / 10922.67)
Available slots:
-> Slot 0 - max context: 2048
llama server listening at http://0.0.0.0:8080
```
2. To access the chatbot interface, open your web browser and enter `http://0.0.0.0:8080` into the address bar.

3. Enter a question, such as "How do you feed your pet fox? Please answer this question in 6 simple steps" into the prompt window, and you will receive a corresponding answer.

</ul>
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Web demo
You can build a web UI demo for Yi **chat** models (note that Yi base models are not supported in this scenario).
[Step 1: Prepare your environment](#step-1-prepare-your-environment).
[Step 2: Download the Yi model](#step-2-download-the-yi-model).
Step 3. To start a web service locally, run the following command.
```bash
python demo/web_demo.py -c <your-model-path>
```
You can access the web UI by entering the address provided in the console into your browser.

<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Fine-tuning
```bash
bash finetune/scripts/run_sft_Yi_6b.sh
```
Once finished, you can compare the finetuned model and the base model with the following command:
```bash
bash finetune/scripts/run_eval.sh
```
<details style="display: inline;"><summary>For advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ </summary> <ul>
### Finetune code for Yi 6B and 34B
#### Preparation
##### From Image
By default, we use a small dataset from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) to finetune the base model.
You can also prepare your customized dataset in the following `jsonl` format:
```json
{ "prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi." }
```
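As an illustrative (unofficial) sketch, you can convert your own prompt/response pairs into this format with a few lines of Python; the example records below are placeholders.

```python
import json

# Hypothetical in-memory examples; replace these with your own data source.
examples = [
    {"prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi."},
    {"prompt": "Human: What can you do? Assistant:", "chosen": "I can answer questions in English and Chinese."},
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")  # one JSON object per line
```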
And then mount them in the container to replace the default ones:
```bash
docker run -it \
-v /path/to/save/finetuned/model/:/finetuned-model \
-v /path/to/train.jsonl:/yi/finetune/data/train.json \
-v /path/to/eval.jsonl:/yi/finetune/data/eval.json \
ghcr.io/01-ai/yi:latest \
bash finetune/scripts/run_sft_Yi_6b.sh
```
##### From Local Server
Make sure you have conda. If not, use
```bash
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
source ~/.bashrc
```
Then, create a conda env:
```bash
conda create -n dev_env python=3.10 -y
conda activate dev_env
pip install torch==2.0.1 deepspeed==0.10 tensorboard transformers datasets sentencepiece accelerate ray==2.7
```
#### Hardware Setup
For the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended.
For the Yi-34B model, the zero-offload technique consumes a lot of CPU memory, so be careful to limit the number of GPUs used for the 34B finetune training. Use CUDA_VISIBLE_DEVICES to limit the number of GPUs (as shown in scripts/run_sft_Yi_34b.sh).
A typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 at runtime by CUDA_VISIBLE_DEVICES=0,1,2,3), each with GPU memory larger than 80GB and total CPU memory larger than 900GB.
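For example, launching the 34B fine-tuning script restricted to the first four GPUs might look like `CUDA_VISIBLE_DEVICES=0,1,2,3 bash scripts/run_sft_Yi_34b.sh` (the script name follows the repository layout described above).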
#### Quick Start
Download a LLM-base model to MODEL_PATH (6B and 34B). A typical folder of models is like:
```bash
|-- $MODEL_PATH
| |-- config.json
| |-- pytorch_model-00001-of-00002.bin
| |-- pytorch_model-00002-of-00002.bin
| |-- pytorch_model.bin.index.json
| |-- tokenizer_config.json
| |-- tokenizer.model
| |-- ...
```
Download a dataset from huggingface to local storage DATA_PATH, e.g. Dahoas/rm-static.
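One possible way to fetch the raw dataset files is with `huggingface_hub` (a sketch, not part of the official scripts; the dataset id and target path are examples). After it completes, the folder should look roughly like the structure below.

```python
from huggingface_hub import snapshot_download

# Download the dataset repository (parquet files, dataset_infos.json, README) to DATA_PATH.
snapshot_download(
    repo_id="Dahoas/rm-static",
    repo_type="dataset",
    local_dir="/path/to/DATA_PATH",  # example placeholder
)
```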
```bash
|-- $DATA_PATH
| |-- data
| | |-- train-00000-of-00001-2a1df75c6bce91ab.parquet
| | |-- test-00000-of-00001-8c7c51afc6d45980.parquet
| |-- dataset_infos.json
| |-- README.md
```
`finetune/yi_example_dataset` has example datasets, which are modified from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG)
```bash
|-- $DATA_PATH
|--data
|-- train.jsonl
|-- eval.jsonl
```
`cd` into the scripts folder, copy and paste the script, and run. For example:
```bash
cd finetune/scripts
bash run_sft_Yi_6b.sh
```
For the Yi-6B base model, setting training_debug_steps=20 and num_train_epochs=4 can output a chat model, which takes about 20 minutes.
For the Yi-34B base model, it takes a relatively long time for initialization. Please be patient.
#### Evaluation
```bash
cd finetune/scripts
bash run_eval.sh
```
Then you'll see the answer from both the base model and the finetuned model.
</ul>
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Quantization
#### GPT-Q
```bash
python quantization/gptq/quant_autogptq.py \
--model /base_model \
--output_dir /quantized_model \
--trust_remote_code
```
Once finished, you can then evaluate the resulting model as follows:
```bash
python quantization/gptq/eval_quantized_model.py \
--model /quantized_model \
--trust_remote_code
```
<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul>
#### GPT-Q quantization
[GPT-Q](https://github.com/IST-DASLab/gptq) is a PTQ (Post-Training Quantization)
method. It saves memory and provides potential speedups while retaining the accuracy
of the model.
Yi models can be GPT-Q quantized without much effort.
We provide a step-by-step tutorial below.
To run GPT-Q, we will use [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) and
[exllama](https://github.com/turboderp/exllama).
The Hugging Face transformers library has integrated optimum and auto-gptq to perform
GPTQ quantization on language models.
##### Do Quantization
The `quant_autogptq.py` script is provided for you to perform GPT-Q quantization:
```bash
python quant_autogptq.py --model /base_model \
--output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```
##### Run Quantized Model
You can run a quantized model using the `eval_quantized_model.py`:
```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```
</ul>
</details>
#### AWQ
```bash
python quantization/awq/quant_autoawq.py \
--model /base_model \
--output_dir /quantized_model \
--trust_remote_code
```
Once finished, you can then evaluate the resulting model as follows:
```bash
python quantization/awq/eval_quantized_model.py \
--model /quantized_model \
--trust_remote_code
```
<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul>
#### AWQ quantization
[AWQ](https://github.com/mit-han-lab/llm-awq) is a PTQ (Post-Training Quantization)
method. It's an efficient and accurate low-bit weight quantization (INT3/4) for LLMs.
Yi models can be AWQ quantized without much effort.
We provide a step-by-step tutorial below.
To run AWQ, we will use [AutoAWQ](https://github.com/casper-hansen/AutoAWQ).
##### Do Quantization
The `quant_autoawq.py` script is provided for you to perform AWQ quantization:
```bash
python quant_autoawq.py --model /base_model \
--output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```
##### Run Quantized Model
You can run a quantized model using the `eval_quantized_model.py`:
```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```
</ul>
</details>
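Since the quick-start example above notes that transformers 4.35.0+ can load GPT-Q/AWQ checkpoints through `AutoModelForCausalLM`, a quantized output directory produced by either script can usually be loaded the same way as the full-precision model. The following is a sketch under that assumption; it requires the corresponding auto-gptq or autoawq package to be installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_dir = "/quantized_model"  # the --output_dir used above
tokenizer = AutoTokenizer.from_pretrained(quantized_dir, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    quantized_dir,
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
).eval()
```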
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Deployment
If you want to deploy Yi models, make sure you meet the software and hardware requirements.
#### Software requirements
Before using Yi quantized models, make sure you've installed the correct software listed below.
| Model | Software
|---|---
Yi 4-bit quantized models | [AWQ and CUDA](https://github.com/casper-hansen/AutoAWQ?tab=readme-ov-file#install-from-pypi)
Yi 8-bit quantized models | [GPTQ and CUDA](https://github.com/PanQiWei/AutoGPTQ?tab=readme-ov-file#quick-installation)
#### Hardware requirements
Before deploying Yi in your environment, make sure your hardware meets the following requirements.
##### Chat models
| Model | Minimum VRAM | Recommended GPU Example |
|:----------------------|:--------------|:-------------------------------------:|
| Yi-6B-Chat | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-Chat-4bits | 4 GB | 1 x RTX 3060 (12 GB)<br> 1 x RTX 4060 (8 GB) |
| Yi-6B-Chat-8bits | 8 GB | 1 x RTX 3070 (8 GB) <br> 1 x RTX 4060 (8 GB) |
| Yi-34B-Chat | 72 GB | 4 x RTX 4090 (24 GB)<br> 1 x A800 (80GB) |
| Yi-34B-Chat-4bits | 20 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) <br> 1 x A100 (40 GB) |
| Yi-34B-Chat-8bits | 38 GB | 2 x RTX 3090 (24 GB) <br> 2 x RTX 4090 (24 GB)<br> 1 x A800 (40 GB) |
Below are detailed minimum VRAM requirements under different batch use cases.
| Model | batch=1 | batch=4 | batch=16 | batch=32 |
| ----------------------- | ------- | ------- | -------- | -------- |
| Yi-6B-Chat | 12 GB | 13 GB | 15 GB | 18 GB |
| Yi-6B-Chat-4bits | 4 GB | 5 GB | 7 GB | 10 GB |
| Yi-6B-Chat-8bits | 7 GB | 8 GB | 10 GB | 14 GB |
| Yi-34B-Chat | 65 GB | 68 GB | 76 GB | > 80 GB |
| Yi-34B-Chat-4bits | 19 GB | 20 GB | 30 GB | 40 GB |
| Yi-34B-Chat-8bits | 35 GB | 37 GB | 46 GB | 58 GB |
##### Base models
| Model | Minimum VRAM | Recommended GPU Example |
|----------------------|--------------|:-------------------------------------:|
| Yi-6B | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-200K | 50 GB | 1 x A800 (80 GB) |
| Yi-9B | 20 GB | 1 x RTX 4090 (24 GB) |
| Yi-34B | 72 GB | 4 x RTX 4090 (24 GB) <br> 1 x A800 (80 GB) |
| Yi-34B-200K | 200 GB | 4 x A800 (80 GB) |
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Learning hub
<details>
<summary> If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️</summary>
<br>
Welcome to the Yi learning hub!
Whether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more.
The content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions!
At the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below.
With all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning! 🥳
#### Tutorials
##### English tutorials
| Type | Deliverable | Date | Author |
|-------------|--------------------------------------------------------|----------------|----------------|
| Video | [Run dolphin-2.2-yi-34b on IoT Devices](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-30 | [Second State](https://github.com/second-state) |
| Blog | [Running Yi-34B-Chat locally using LlamaEdge](https://www.secondstate.io/articles/yi-34b/) | 2023-11-30 | [Second State](https://github.com/second-state) |
| Video | [Install Yi 34B Locally - Chinese English Bilingual LLM](https://www.youtube.com/watch?v=CVQvj4Wrh4w&t=476s) | 2023-11-05 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) |
| Video | [Dolphin Yi 34b - Brand New Foundational Model TESTED](https://www.youtube.com/watch?v=On3Zuv27V3k&t=85s) | 2023-11-27 | [Matthew Berman](https://www.youtube.com/@matthew_berman) |
##### Chinese tutorials
| Type | Deliverable | Date | Author |
|-------------|--------------------------------------------------------|----------------|----------------|
| Blog | [实测零一万物Yi-VL多模态语言模型:能准确“识图吃瓜”](https://mp.weixin.qq.com/s/fu4O9XvJ03JhimsEyI-SsQ) | 2024-02-02 | [苏洋](https://github.com/soulteary) |
| Blog | [本地运行零一万物 34B 大模型,使用 Llama.cpp & 21G 显存](https://zhuanlan.zhihu.com/p/668921042) | 2023-11-26 | [苏洋](https://github.com/soulteary) |
| Blog | [零一万物模型折腾笔记:官方 Yi-34B 模型基础使用](https://zhuanlan.zhihu.com/p/671387298) | 2023-12-10 | [苏洋](https://github.com/soulteary) |
| Blog | [CPU 混合推理,非常见大模型量化方案:“二三五六” 位量化方案](https://zhuanlan.zhihu.com/p/671698216) | 2023-12-12 | [苏洋](https://github.com/soulteary) |
| Blog | [单卡 3 小时训练 Yi-6B 大模型 Agent:基于 Llama Factory 实战](https://zhuanlan.zhihu.com/p/678989191) | 2024-01-22 | [郑耀威](https://github.com/hiyouga) |
| Blog | [零一万物开源Yi-VL多模态大模型,魔搭社区推理&微调最佳实践来啦!](https://zhuanlan.zhihu.com/p/680098411) | 2024-01-26 | [ModelScope](https://github.com/modelscope) |
| Video | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://www.bilibili.com/video/BV17t4y1f7Ee/) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) |
| Video | [Yi-VL-34B 多模态大模型 - 用两张 A40 显卡跑起来](https://www.bilibili.com/video/BV1Q5411y7AG/) | 2023-01-28 | [漆妮妮](https://space.bilibili.com/1262370256) |
</details>
# Why Yi?
- [Ecosystem](#ecosystem)
- [Upstream](#upstream)
- [Downstream](#downstream)
- [Serving](#serving)
- [Quantization](#quantization-1)
- [Fine-tuning](#fine-tuning-1)
- [API](#api)
- [Benchmarks](#benchmarks)
- [Chat model performance](#chat-model-performance)
- [Base model performance](#base-model-performance)
- [Yi-34B and Yi-34B-200K](#yi-34b-and-yi-34b-200k)
- [Yi-9B](#yi-9b)
## Ecosystem
Yi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity.
- [Upstream](#upstream)
- [Downstream](#downstream)
- [Serving](#serving)
- [Quantization](#quantization-1)
- [Fine-tuning](#fine-tuning-1)
- [API](#api)
### Upstream
The Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency.
For example, the Yi series models are saved in the format of the Llama model. You can directly use `LlamaForCausalLM` and `LlamaTokenizer` to load the model. For more information, see [Use the chat model](#31-use-the-chat-model).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34b", device_map="auto")
```
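Equivalently, because the checkpoints are stored in the Llama format, the dedicated Llama classes mentioned above can be used directly; a minimal sketch mirroring the Auto-class example:

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("01-ai/Yi-34b", use_fast=False)
model = LlamaForCausalLM.from_pretrained("01-ai/Yi-34b", device_map="auto")
```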
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Downstream
> 💡 Tip
>
> - Feel free to create a PR and share the fantastic work you've built using the Yi series models.
>
> - To help others quickly understand your work, it is recommended to use the format of `<model-name>: <model-intro> + <model-highlights>`.
#### Serving
If you want to get up and running with Yi in a few minutes, you can use the following services built upon Yi.
- Yi-34B-Chat: you can chat with Yi using one of the following platforms:
- [Yi-34B-Chat | Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat)
- [Yi-34B-Chat | Yi Platform](https://platform.lingyiwanwu.com/): **Note** that currently it's available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)) and experience it firsthand!
- [Yi-6B-Chat (Replicate)](https://replicate.com/01-ai): you can use this model with more options by setting additional parameters and calling APIs.
- [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): you can use this service to run Yi models locally with added flexibility and customization.
#### Quantization
If you have limited computational capabilities, you can use Yi's quantized models as follows.
These quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage.
- [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ)
- [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF)
- [TheBloke/Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ)
#### Fine-tuning
If you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below.
- [TheBloke Models](https://huggingface.co/TheBloke): this site hosts numerous fine-tuned models derived from various LLMs including Yi.
This is not an exhaustive list for Yi, but to name a few sorted on downloads:
- [TheBloke/dolphin-2_2-yi-34b-AWQ](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ)
- [TheBloke/Yi-34B-Chat-AWQ](https://huggingface.co/TheBloke/Yi-34B-Chat-AWQ)
- [TheBloke/Yi-34B-Chat-GPTQ](https://huggingface.co/TheBloke/Yi-34B-Chat-GPTQ)
- [SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B): this model ranked first among all models below 70B and outperformed deepseek-llm-67b-chat, a model twice its size. You can check the result on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
- [OrionStarAI/OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama): this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the [OpenCompass LLM Leaderboard](https://opencompass.org.cn/leaderboard-llm).
- [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B): this model is trained with 200K context length and 3 epochs on the Capybara dataset.
#### API
- [amazing-openai-api](https://github.com/soulteary/amazing-openai-api): this tool converts Yi model APIs into the OpenAI API format out of the box.
- [LlamaEdge](https://www.secondstate.io/articles/yi-34b/#create-an-openai-compatible-api-service-for-the-yi-34b-chat-model): this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust.
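For instance, once one of these OpenAI-compatible servers is running, a client call might look like the sketch below; the base URL, port, and model name are placeholders that depend on how the server was started.

```python
from openai import OpenAI

# Point the standard OpenAI client at the locally hosted, OpenAI-compatible endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Yi-34B-Chat",  # placeholder; use the model name the server registers
    messages=[{"role": "user", "content": "hi"}],
)
print(response.choices[0].message.content)
```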
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
## Tech report
For detailed capabilities of the Yi series model, see [Yi: Open Foundation Models by 01.AI](https://arxiv.org/abs/2403.04652).
### Citation
```
@misc{ai2024yi,
title={Yi: Open Foundation Models by 01.AI},
author={01. AI and : and Alex Young and Bei Chen and Chao Li and Chengen Huang and Ge Zhang and Guanwei Zhang and Heng Li and Jiangcheng Zhu and Jianqun Chen and Jing Chang and Kaidong Yu and Peng Liu and Qiang Liu and Shawn Yue and Senbin Yang and Shiming Yang and Tao Yu and Wen Xie and Wenhao Huang and Xiaohui Hu and Xiaoyi Ren and Xinyao Niu and Pengcheng Nie and Yuchi Xu and Yudong Liu and Yue Wang and Yuxuan Cai and Zhenyu Gu and Zhiyuan Liu and Zonghong Dai},
year={2024},
eprint={2403.04652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Benchmarks
- [Chat model performance](#chat-model-performance)
- [Base model performance](#base-model-performance)
### Chat model performance
Yi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models in the benchmarks including MMLU, CMMLU, BBH, GSM8k, and more.

<details>
<summary> Evaluation methods and challenges. ⬇️ </summary>
- **Evaluation methods**: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA.
- **Zero-shot vs. few-shot**: in chat models, the zero-shot approach is more commonly employed.
- **Evaluation strategy**: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text.
- **Challenges faced**: some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results.
<strong>*</strong>: C-Eval results are evaluated on the validation datasets
</details>
### Base model performance
#### Yi-34B and Yi-34B-200K
The Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more.

<details>
<summary> Evaluation methods. ⬇️</summary>
- **Disparity in results**: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass.
- **Investigation findings**: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences.
- **Uniform benchmarking process**: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content.
- **Efforts to retrieve unreported scores**: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline.
- **Extensive model evaluation**: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension.
- **Special configurations**: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code".
- **Falcon-180B caveat**: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated.
</details>
#### Yi-9B
Yi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.

- In terms of **overall** ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B.

- In terms of **coding** ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B.

- In terms of **math** ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B.

- In terms of **common sense and reasoning** ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B.

<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
# Who can use Yi?
Everyone! 🙌 ✅
- The Yi series models are free for personal usage, academic purposes, and commercial use. All usage must adhere to the [Yi Series Models Community License Agreement 2.1](https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt)
- For free commercial use, you only need to [complete this form](https://www.lingyiwanwu.com/yi-license) to get a Yi Model Commercial License.
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
# Misc.
### Acknowledgments
A heartfelt thank you to each of you who has made contributions to the Yi community! You have helped make Yi not just a project, but a vibrant, growing home for innovation.
[](https://github.com/01-ai/yi/graphs/contributors)
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Disclaimer
We use data compliance checking algorithms during the training process, to
ensure the compliance of the trained model to the best of our ability. Due to
complex data and the diversity of language model usage scenarios, we cannot
guarantee that the model will generate correct and reasonable output in all
scenarios. Please be aware that there is still a risk of the model producing
problematic outputs. We will not be responsible for any risks and issues
resulting from misuse, misguidance, illegal usage, and related misinformation,
as well as any associated data security concerns.
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### License
The source code in this repo is licensed under the [Apache 2.0
license](https://github.com/01-ai/Yi/blob/main/LICENSE). The Yi series models are fully open for academic research and free for commercial use, with automatic permission granted upon application. All usage must adhere to the [Yi Series Models Community License Agreement 2.1](https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt).
For free commercial use, you only need to send an email to [get official commercial permission](https://www.lingyiwanwu.com/yi-license).
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
|
{}
|
RichardErkhov/01-ai_-_Yi-6B-gguf
| null |
[
"gguf",
"arxiv:2403.04652",
"arxiv:2311.16502",
"arxiv:2401.11944",
"region:us"
] | null |
2024-04-12T20:18:47+00:00
|
[
"2403.04652",
"2311.16502",
"2401.11944"
] |
[] |
TAGS
#gguf #arxiv-2403.04652 #arxiv-2311.16502 #arxiv-2401.11944 #region-us
|
GGUF quantization made by Richard Erkhov.
Github
Discord
Request more models
Yi-6B - GGUF
* Model creator: URL
* Original model: URL
| Name | Quant method | Size |
| ---- | ------------ | ---- |
| Yi-6B.Q2\_K.gguf | Q2\_K | 2.18GB |
| Yi-6B.IQ3\_XS.gguf | IQ3\_XS | 2.41GB |
| Yi-6B.IQ3\_S.gguf | IQ3\_S | 2.53GB |
| Yi-6B.Q3\_K\_S.gguf | Q3\_K\_S | 2.52GB |
| Yi-6B.IQ3\_M.gguf | IQ3\_M | 2.62GB |
| Yi-6B.Q3\_K.gguf | Q3\_K | 2.79GB |
| Yi-6B.Q3\_K\_M.gguf | Q3\_K\_M | 2.79GB |
| Yi-6B.Q3\_K\_L.gguf | Q3\_K\_L | 3.01GB |
| Yi-6B.IQ4\_XS.gguf | IQ4\_XS | 3.11GB |
| Yi-6B.Q4\_0.gguf | Q4\_0 | 3.24GB |
| Yi-6B.IQ4\_NL.gguf | IQ4\_NL | 3.27GB |
| Yi-6B.Q4\_K\_S.gguf | Q4\_K\_S | 3.26GB |
| Yi-6B.Q4\_K.gguf | Q4\_K | 3.42GB |
| Yi-6B.Q4\_K\_M.gguf | Q4\_K\_M | 3.42GB |
| Yi-6B.Q4\_1.gguf | Q4\_1 | 3.58GB |
| Yi-6B.Q5\_0.gguf | Q5\_0 | 3.92GB |
| Yi-6B.Q5\_K\_S.gguf | Q5\_K\_S | 3.92GB |
| Yi-6B.Q5\_K.gguf | Q5\_K | 4.01GB |
| Yi-6B.Q5\_K\_M.gguf | Q5\_K\_M | 4.01GB |
| Yi-6B.Q5\_1.gguf | Q5\_1 | 4.25GB |
| Yi-6B.Q6\_K.gguf | Q6\_K | 4.63GB |
Original model description:
---
license: other
license\_name: yi-license
license\_link: LICENSE
widget:
* example\_title: "Yi-34B-Chat"
text: "hi"
output:
text: " Hello! How can I assist you today?"
* example\_title: "Yi-34B"
text: "There's a place where time stands still. A place of breath taking wonder, but also"
output:
text: " an eerie sense that something is just not right…\nBetween the two worlds lies The Forgotten Kingdom - home to creatures long since thought extinct and ancient magic so strong it defies belief! Only here can you find what has been lost for centuries: An Elixir Of Life which will restore youth and vitality if only those who seek its power are brave enough to face up against all manner of dangers lurking in this mysterious land! But beware; some say there may even exist powerful entities beyond our comprehension whose intentions towards humanity remain unclear at best ---- they might want nothing more than destruction itself rather then anything else from their quest after immortality (and maybe someone should tell them about modern medicine)? In any event though – one thing remains true regardless : whether or not success comes easy depends entirely upon how much effort we put into conquering whatever challenges lie ahead along with having faith deep down inside ourselves too ;) So let’s get started now shall We?"
pipeline\_tag: text-generation
---

[](mailto:oss@URL)
### Building the Next Generation of Open-Source and Bilingual LLMs
[Hugging Face](URL target=) • [ModelScope](URL target=) • ️ [WiseModel](URL target=)
Ask questions or discuss ideas on [GitHub](01-ai/Yi · Discussions)
Join us on [Discord](URL target=) or [WeChat](有官方的微信群嘛 · Issue #43 · 01-ai/Yi)
Check out the Yi Tech Report • Grow at the [Yi Learning Hub](#learning-hub)
---
Table of Contents
* What is Yi?
+ Introduction
+ Models
- Chat models
- Base models
- Model info
+ News
* How to use Yi?
+ Quick start
- Choose your path
- pip
- docker
- URL
- conda-lock
- Web demo
+ Fine-tuning
+ Quantization
+ Deployment
+ Learning hub
* Why Yi?
+ Ecosystem
- Upstream
- Downstream
* Serving
* Quantization
* Fine-tuning
* API
+ Benchmarks
- Base model performance
- Chat model performance
+ Tech report
- Citation
* Who can use Yi?
* Misc.
+ Acknowledgements
+ Disclaimer
+ License
---
What is Yi?
===========
Introduction
------------
* The Yi series models are the next generation of open-source large language models trained from scratch by 01.AI.
* Targeted as a bilingual language model and trained on 3T multilingual corpus, the Yi series models become one of the strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example,
* Yi-34B-Chat model landed in second place (following GPT-4 Turbo), outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024).
* Yi-34B model ranked first among all existing open-source models (such as Falcon-180B, Llama-70B, Claude) in both English and Chinese on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023).
* (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem.
If you're interested in Yi's adoption of Llama architecture and license usage policy, see Yi's relation with Llama. ⬇️
>
> TL;DR
>
>
> The Yi series models adopt the same model architecture as Llama but are NOT derivatives of Llama.
>
>
>
+ Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018.
+ Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi.
+ Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems.
+ However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights.
- As Llama's structure is employed by the majority of open-source models, the key factors of determining model performance are training datasets, training pipelines, and training infrastructure.
- Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. This effort has led to excellent performance with Yi series models ranking just behind GPT4 and surpassing Llama on the Alpaca Leaderboard in Dec 2023.
[
[Back to top ⬆️](#top) ]
News
----
**2024-03-16**: The `Yi-9B-200K` is open-sourced and available to the public.
**2024-03-08**: The Yi Tech Report is published.
**2024-03-07**: The long text capability of the Yi-34B-200K has been enhanced.
**2024-03-06**: The `Yi-9B` is open-sourced and available to the public.
`Yi-9B` stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.
**2024-01-23**: The Yi-VL models are open-sourced and available to the public.
**2023-11-23**: The chat models are open-sourced and available to the public.
This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ.
* 'Yi-34B-Chat'
* 'Yi-34B-Chat-4bits'
* 'Yi-34B-Chat-8bits'
* 'Yi-6B-Chat'
* 'Yi-6B-Chat-4bits'
* 'Yi-6B-Chat-8bits'
You can try some of them interactively at:
* Hugging Face
* Replicate
**2023-11-23**: The Yi Series Models Community License Agreement is updated to v2.1.
**2023-11-08**: Invited test of the Yi-34B chat model.
Application form:
* English
* Chinese
**2023-11-05**: The base models, `Yi-6B-200K` and `Yi-34B-200K`, are open-sourced and available to the public.
This release contains two base models with the same parameter sizes as the previous
release, except that the context window is extended to 200K.
**2023-11-02**: [The base models,](#base-models) `Yi-6B` and `Yi-34B`, are open-sourced and available to the public.
The first public release contains two bilingual (English/Chinese) base models
with the parameter sizes of 6B and 34B. Both of them are trained with 4K
sequence length and can be extended to 32K during inference time.
[
[Back to top ⬆️](#top) ]
Models
------
Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements.
If you want to deploy Yi models, make sure you meet the software and hardware requirements.
### Chat models
- 4-bit series models are quantized by AWQ.
- 8-bit series models are quantized by GPTQ.
- All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090).
### Base models
- 200k is roughly equivalent to 400,000 Chinese characters.
- If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run 'git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf' to download the weight.
### Model info
* For chat and base models
Model: 9B series models. Intro: It is the best at coding and math in the Yi series models. Pretrained tokens: Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens.
Model: 34B series models. Intro: They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. It's a cost-effective solution that's affordable and equipped with emergent ability. Pretrained tokens: 3T.
* For chat models
For chat model limitations, see the explanations below. ⬇️
The released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training.
However, this higher diversity might amplify certain existing issues, including:
+ Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucinations that are not based on accurate data or logical reasoning.
+ Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. The increased diversity can lead to varying results even under similar input conditions.
+ Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.
+ To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top\_p, or top\_k. These adjustments can help strike a balance between creativity and coherence in the model's outputs (see the sketch below).
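As an illustrative sketch (not guidance from the Yi team), such sampling parameters can be collected in a Hugging Face `GenerationConfig`; every value below is an assumption to tune per task:

```python
from transformers import GenerationConfig

# Illustrative values only; adjust per task to trade creativity for coherence.
gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.3,         # lower temperature -> more focused sampling
    top_p=0.8,               # nucleus sampling cutoff
    top_k=40,                # restrict sampling to the 40 most likely tokens
    repetition_penalty=1.1,  # discourage verbatim repetition
    max_new_tokens=256,
)
# Pass it to model.generate(..., generation_config=gen_config).
```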
[
[Back to top ⬆️](#top) ]
How to use Yi?
==============
* Quick start
+ Choose your path
+ pip
+ docker
+ conda-lock
+ URL
+ Web demo
* Fine-tuning
* Quantization
* Deployment
* Learning hub
Quick start
-----------
Getting up and running with Yi models is simple with multiple choices available.
### Choose your path
Select one of the following paths to begin your journey with Yi!
!Quick start - Choose your path
#### Deploy Yi locally
If you prefer to deploy Yi models locally,
* ️ and you have sufficient resources (for example, NVIDIA A800 80GB), you can choose one of the following methods:
+ pip
+ Docker
+ conda-lock
* ️ and you have limited resources (for example, a MacBook Pro), you can use URL.
#### Not to deploy Yi locally
If you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options.
##### ️ Run Yi with APIs
If you want to explore more features of Yi, you can adopt one of these methods:
* Yi APIs (Yi official)
+ Early access has been granted to some applicants. Stay tuned for the next round of access!
* Yi APIs (Replicate)
##### ️ Run Yi in playground
If you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options:
* Yi-34B-Chat-Playground (Yi official)
+ Access is available through a whitelist. Welcome to apply (fill out a form in English or Chinese).
* Yi-34B-Chat-Playground (Replicate)
##### ️ Chat with Yi
If you want to chat with Yi, you can use one of these online services, which offer a similar user experience:
* Yi-34B-Chat (Yi official on Hugging Face)
+ No registration is required.
* Yi-34B-Chat (Yi official beta)
+ Access is available through a whitelist. Welcome to apply (fill out a form in English or Chinese).
[
[Back to top ⬆️](#top) ]
### Quick start - pip
This tutorial guides you through every step of running Yi-34B-Chat locally on an A800 (80G) and then performing inference.
#### Step 0: Prerequisites
* Make sure Python 3.10 or a later version is installed.
* If you want to run other Yi models, see software and hardware requirements.
#### Step 1: Prepare your environment
To set up the environment and install the required packages, execute the following command.
#### Step 2: Download the Yi model
You can download the weights and tokenizer of Yi models from the following sources:
* Hugging Face
* ModelScope
* WiseModel
#### Step 3: Perform inference
You can perform inference with Yi chat or base models as below.
##### Perform inference with Yi chat model
1. Create a file named 'quick\_start.py' and copy the following content to it.
2. Run 'quick\_start.py'.
Then you can see an output similar to the one below.
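The exact contents of 'quick\_start.py' are not reproduced in this card; a minimal sketch (assuming the chat model is pulled from the Hub as `01-ai/Yi-34B-Chat`, or replaced with your local download path) might look like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "01-ai/Yi-34B-Chat"  # assumed identifier; replace with your local path

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", torch_dtype="auto"
)

# Yi chat models ship a chat template with the tokenizer.
messages = [{"role": "user", "content": "hi"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=256)
response = tokenizer.decode(
    output_ids[0][input_ids.shape[1]:], skip_special_tokens=True
)
print(response)
```

Note that `device_map="auto"` requires the `accelerate` package to be installed.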
##### Perform inference with Yi base model
* Yi-34B
The steps are similar to pip - Perform inference with Yi chat model.
You can use the existing file 'text\_generation.py'.
Then you can see an output similar to the one below.
Output. ⬇️
Prompt: Let me tell you an interesting story about cat Tom and mouse Jerry,
Generation: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up...
* Yi-9B
Input
Output
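For base models, a comparable sketch of 'text\_generation.py'-style usage (model identifier and sampling settings are assumptions) is:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "01-ai/Yi-9B"  # assumed identifier; "01-ai/Yi-34B" works the same way
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", torch_dtype="auto"
)

prompt = "Let me tell you an interesting story about cat Tom and mouse Jerry,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```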
[
[Back to top ⬆️](#top) ]
### Quick start - Docker
Run Yi-34B-chat locally with Docker: a step-by-step guide. ⬇️
This tutorial guides you through every step of running **Yi-34B-Chat on an A800 GPU** or **4\*4090** locally and then performing inference.
#### Step 0: Prerequisites
Make sure you've installed Docker and the NVIDIA Container Toolkit (needed for `--gpus all`).
#### Step 1: Start Docker
```
docker run -it --gpus all \
  -v <your-model-path>:/models \
  URL
```
Alternatively, you can pull the Yi Docker image from `URL`.
#### Step 2: Perform inference
You can perform inference with Yi chat or base models as below.
##### Perform inference with Yi chat model
The steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model).
**Note** that the only difference is to set `model_path = '<your-model-mount-path>'` instead of `model_path = '<your-model-path>'`.
##### Perform inference with Yi base model
The steps are similar to [pip - Perform inference with Yi base model](#perform-inference-with-yi-base-model).
**Note** that the only difference is to set `--model <your-model-mount-path>` instead of `--model <your-model-path>`.
### Quick start - conda-lock
You can use conda-lock to generate fully reproducible lock files for conda environments. ⬇️
You can refer to URL for installing these dependencies.
To install the dependencies, follow these steps:
1. Install micromamba by following the instructions available at URL.
2. Execute `micromamba install -y -n yi -f URL` to create a conda environment named `yi` and install the necessary dependencies.
### Quick start - URL
Run Yi-chat-6B-2bits locally with URL: a step-by-step guide. ⬇️
This tutorial guides you through every step of running a quantized model (yi-chat-6b.Q2\_K.gguf) locally and then performing inference.
* Step 0: Prerequisites
* Step 1: Download URL
* Step 2: Download Yi model
* Step 3: Perform inference
#### Step 0: Prerequisites
* This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip.
* Make sure 'git-lfs' is installed on your machine.
#### Step 1: Download 'URL'
To clone the 'URL' repository, run the following command.
#### Step 2: Download Yi model
2.1 To clone XeIaso/yi-chat-6B-GGUF with just pointers, run the following command.
2.2 To download a quantized Yi model (yi-chat-6b.Q2\_K.gguf), run the following command.
#### Step 3: Perform inference
To perform inference with the Yi model, you can use one of the following methods.
* Method 1: Perform inference in terminal
* Method 2: Perform inference in web
##### Method 1: Perform inference in terminal
To compile 'URL' using 4 threads and then conduct inference, navigate to the 'URL' directory, and run the following command.
>
> ##### Tips
>
> * Replace '/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2\_K.gguf' with the actual path of your model.
> * By default, the model operates in completion mode.
> * For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run './main -h' to check detailed descriptions and usage.
>
Now you have successfully asked a question to the Yi model and got an answer!
##### Method 2: Perform inference in web
1. To initialize a lightweight and swift chatbot, run the following command.
Then you can get an output like this:
2. To access the chatbot interface, open your web browser and enter 'http://0.0.0.0:8080' into the address bar.
!Yi model chatbot interface - URL
3. Enter a question, such as "How do you feed your pet fox? Please answer this question in 6 simple steps" into the prompt window, and you will receive a corresponding answer.
!Ask a question to Yi model - URL
[
[Back to top ⬆️](#top) ]
### Web demo
You can build a web UI demo for Yi chat models (note that Yi base models are not supported in this scenario).
Step 1: Prepare your environment.
Step 2: Download the Yi model.
Step 3. To start a web service locally, run the following command.
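The repository ships its own demo script; as a rough stand-in, a minimal Gradio-based chat UI (all names below are assumptions, not the script from the Yi repo) could look like:

```python
import gradio as gr
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "01-ai/Yi-6B-Chat"  # assumed identifier; replace with your local path
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", torch_dtype="auto"
)

def chat(message, history):
    # Only the latest user turn is used here to keep the sketch short.
    messages = [{"role": "user", "content": message}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=256)
    return tokenizer.decode(
        output_ids[0][input_ids.shape[1]:], skip_special_tokens=True
    )

gr.ChatInterface(chat).launch()  # serves a local web UI, by default on port 7860
```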
You can access the web UI by entering the address provided in the console into your browser.
!Quick start - web demo
[
[Back to top ⬆️](#top) ]
### Fine-tuning
Once finished, you can compare the finetuned model and the base model with the following command:
For advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️
### Finetune code for Yi 6B and 34B
#### Preparation
##### From Image
By default, we use a small dataset from BAAI/COIG to finetune the base model.
You can also prepare your customized dataset in the following 'jsonl' format:
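The authoritative field names are defined by the example data in 'finetune/yi\_example\_dataset'; as a hypothetical illustration of producing such a 'jsonl' file (field names here are assumptions to verify against the repo):

```python
import json

# Hypothetical prompt/response records; check the field names against
# finetune/yi_example_dataset before training.
examples = [
    {"prompt": "Human: Who are you? Assistant:", "chosen": "I am Yi, an assistant."},
    {"prompt": "Human: Translate 'hello' into French. Assistant:", "chosen": "Bonjour."},
]

with open("custom_dataset.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```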
And then mount them in the container to replace the default ones:
##### From Local Server
Make sure you have conda. If not, use
Then, create a conda env:
#### Hardware Setup
For the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended.
For the Yi-34B model, because the usage of the zero-offload technique consumes a lot of CPU memory, please be careful to limit the number of GPUs in the 34B finetune training. Please use CUDA\_VISIBLE\_DEVICES to limit the number of GPUs (as shown in scripts/run\_sft\_Yi\_34b.sh).
A typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 in running by CUDA\_VISIBLE\_DEVICES=0,1,2,3), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB.
#### Quick Start
Download a LLM-base model to MODEL\_PATH (6B and 34B). A typical folder of models is like:
Download a dataset from huggingface to local storage DATA\_PATH, e.g. Dahoas/rm-static.
'finetune/yi\_example\_dataset' has example datasets, which are modified from BAAI/COIG
'cd' into the scripts folder, copy and paste the script, and run. For example:
For the Yi-6B base model, setting training\_debug\_steps=20 and num\_train\_epochs=4 can output a chat model, which takes about 20 minutes.
For the Yi-34B base model, it takes a relatively long time for initialization. Please be patient.
#### Evaluation
Then you'll see the answer from both the base model and the finetuned model.
[
[Back to top ⬆️](#top) ]
### Quantization
#### GPT-Q
Once finished, you can then evaluate the resulting model as follows:
For details, see the explanations below. ⬇️
#### GPT-Q quantization
GPT-Q is a PTQ (Post-Training Quantization)
method. It saves memory and provides potential speedups while retaining the accuracy
of the model.
Yi models can be GPT-Q quantized without a lot of effort.
We provide a step-by-step tutorial below.
To run GPT-Q, we will use AutoGPTQ and
exllama.
The Hugging Face transformers library has integrated optimum and auto-gptq to perform
GPTQ quantization on language models.
##### Do Quantization
The 'quant\_autogptq.py' script is provided for you to perform GPT-Q quantization:
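The 'quant\_autogptq.py' script itself lives in the Yi repository; a rough sketch of the same idea with AutoGPTQ (model id, bit width, and calibration text are assumptions) is:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_path = "01-ai/Yi-6B"       # assumed model id
quantized_path = "./Yi-6B-GPTQ"  # output directory

quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoGPTQForCausalLM.from_pretrained(model_path, quantize_config)

# GPT-Q needs a calibration set; a single sample keeps the sketch short,
# but a few hundred samples give a more accurate quantized model.
calibration = [
    tokenizer("The Yi series models are bilingual large language models.",
              return_tensors="pt")
]
model.quantize(calibration)

model.save_quantized(quantized_path)
tokenizer.save_pretrained(quantized_path)
```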
##### Run Quantized Model
You can run a quantized model using the 'eval\_quantized\_model.py':
#### AWQ
Once finished, you can then evaluate the resulting model as follows:
For details, see the explanations below. ⬇️
#### AWQ quantization
AWQ is a PTQ (Post-Training Quantization)
method. It's an efficient and accurate low-bit weight quantization (INT3/4) for LLMs.
Yi models can be AWQ quantized without a lot of effort.
We provide a step-by-step tutorial below.
To run AWQ, we will use AutoAWQ.
##### Do Quantization
The 'quant\_autoawq.py' script is provided for you to perform AWQ quantization:
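As with GPT-Q, the repository provides 'quant\_autoawq.py'; a rough AutoAWQ sketch of the same workflow (model id and quantization settings are assumptions) is:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "01-ai/Yi-6B"   # assumed model id
quant_path = "./Yi-6B-AWQ"   # output directory
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# AutoAWQ downloads a small calibration corpus internally during quantize().
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```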
##### Run Quantized Model
You can run a quantized model using the 'eval\_quantized\_model.py':
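Evaluation is done with 'eval\_quantized\_model.py' in the repo; loading a saved quantized checkpoint through plain transformers (paths assumed from the sketches above) looks roughly like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

quant_path = "./Yi-6B-AWQ"  # or "./Yi-6B-GPTQ"; assumed output of the steps above
tokenizer = AutoTokenizer.from_pretrained(quant_path)
model = AutoModelForCausalLM.from_pretrained(quant_path, device_map="auto")

inputs = tokenizer("There's a place where time stands still.",
                   return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```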
[
[Back to top ⬆️](#top) ]
### Deployment
If you want to deploy Yi models, make sure you meet the software and hardware requirements.
#### Software requirements
Before using Yi quantized models, make sure you've installed the correct software listed below.
#### Hardware requirements
Before deploying Yi in your environment, make sure your hardware meets the following requirements.
##### Chat models
Below are detailed minimum VRAM requirements under different batch use cases.
##### Base models
[
[Back to top ⬆️](#top) ]
### Learning hub
If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️
Welcome to the Yi learning hub!
Whether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more.
The content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions!
At the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below.
With all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning!
#### Tutorials
##### English tutorials
##### Chinese tutorials
Why Yi?
=======
* Ecosystem
+ Upstream
+ Downstream
- Serving
- Quantization
- Fine-tuning
- API
* Benchmarks
+ Chat model performance
+ Base model performance
- Yi-34B and Yi-34B-200K
- Yi-9B
Ecosystem
---------
Yi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity.
* Upstream
* Downstream
+ Serving
+ Quantization
+ Fine-tuning
+ API
### Upstream
The Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency.
For example, the Yi series models are saved in the format of the Llama model. You can directly use 'LlamaForCausalLM' and 'LlamaTokenizer' to load the model. For more information, see Use the chat model.
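A minimal sketch of this (the model id is an assumption; any Yi checkpoint loads the same way):

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

# Because Yi reuses the Llama architecture, the Llama classes load it directly.
model_path = "01-ai/Yi-6B"  # assumed model id
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, device_map="auto", torch_dtype="auto"
)
```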
[
[Back to top ⬆️](#top) ]
### Downstream
>
> Tip
>
>
> * Feel free to create a PR and share the fantastic work you've built using the Yi series models.
> * To help others quickly understand your work, it is recommended to use the format of ': + '.
>
>
>
#### Serving
If you want to get up and running with Yi in a few minutes, you can use the following services built upon Yi.
* Yi-34B-Chat: you can chat with Yi using one of the following platforms:
+ Yi-34B-Chat | Hugging Face
+ Yi-34B-Chat | Yi Platform: Note that currently it's available through a whitelist. Welcome to apply (fill out a form in English or Chinese) and experience it firsthand!
* Yi-6B-Chat (Replicate): you can use this model with more options by setting additional parameters and calling APIs.
* ScaleLLM: you can use this service to run Yi models locally with added flexibility and customization.
#### Quantization
If you have limited computational capabilities, you can use Yi's quantized models as follows.
These quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage.
* TheBloke/Yi-34B-GPTQ
* TheBloke/Yi-34B-GGUF
* TheBloke/Yi-34B-AWQ
#### Fine-tuning
If you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below.
* TheBloke Models: this site hosts numerous fine-tuned models derived from various LLMs including Yi.
This is not an exhaustive list for Yi, but to name a few sorted on downloads:
+ TheBloke/dolphin-2\_2-yi-34b-AWQ
+ TheBloke/Yi-34B-Chat-AWQ
+ TheBloke/Yi-34B-Chat-GPTQ
* SUSTech/SUS-Chat-34B: this model ranked first among all models below 70B and outperformed deepseek-llm-67b-chat, which is roughly twice its size. You can check the result on the Open LLM Leaderboard.
* OrionStarAI/OrionStar-Yi-34B-Chat-Llama: this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the OpenCompass LLM Leaderboard.
* NousResearch/Nous-Capybara-34B: this model is trained with 200K context length and 3 epochs on the Capybara dataset.
#### API
* amazing-openai-api: this tool converts Yi model APIs into the OpenAI API format out of the box.
* LlamaEdge: this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust.
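Both tools expose the standard OpenAI-style chat endpoint, so a client call is only a few lines; the base URL, port, and model name below are assumptions for a locally running server:

```python
from openai import OpenAI

# Point the client at the locally running OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Yi-34B-Chat",  # assumed model name registered by the server
    messages=[{"role": "user", "content": "hi"}],
)
print(response.choices[0].message.content)
```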
[
[Back to top ⬆️](#top) ]
Tech report
-----------
For detailed capabilities of the Yi series model, see Yi: Open Foundation Models by 01.AI.
Benchmarks
----------
* Chat model performance
* Base model performance
### Chat model performance
Yi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models in the benchmarks including MMLU, CMMLU, BBH, GSM8k, and more.
!Chat model performance
Evaluation methods and challenges. ⬇️
* Evaluation methods: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA.
* Zero-shot vs. few-shot: in chat models, the zero-shot approach is more commonly employed.
* Evaluation strategy: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text.
* Challenges faced: some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results.
**\***: C-Eval results are evaluated on the validation datasets
### Base model performance
#### Yi-34B and Yi-34B-200K
The Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more.
!Base model performance
Evaluation methods. ⬇️
* Disparity in results: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass.
* Investigation findings: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences.
* Uniform benchmarking process: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content.
* Efforts to retrieve unreported scores: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline.
* Extensive model evaluation: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension.
* Special configurations: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code".
* Falcon-180B caveat: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated.
#### Yi-9B
Yi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.
!Yi-9B benchmark - details
* In terms of overall ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B.
!Yi-9B benchmark - overall
* In terms of coding ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B.
!Yi-9B benchmark - code
* In terms of math ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B.
!Yi-9B benchmark - math
* In terms of common sense and reasoning ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B.
!Yi-9B benchmark - text
[
[Back to top ⬆️](#top) ]
Who can use Yi?
===============
Everyone!
* The Yi series models are free for personal usage, academic purposes, and commercial use. All usage must adhere to the Yi Series Models Community License Agreement 2.1
* For free commercial use, you only need to complete this form to get a Yi Model Commercial License.
[
[Back to top ⬆️](#top) ]
Misc.
=====
### Acknowledgments
A heartfelt thank you to each of you who have made contributions to the Yi community! You have helped make Yi not just a project, but a vibrant, growing home for innovation.
 ]
### Disclaimer
We use data compliance checking algorithms during the training process to
ensure the compliance of the trained model to the best of our ability. Due to
complex data and the diversity of language model usage scenarios, we cannot
guarantee that the model will generate correct and reasonable output in all
scenarios. Please be aware that there is still a risk of the model producing
problematic outputs. We will not be responsible for any risks and issues
resulting from misuse, misguidance, illegal usage, and related misinformation,
as well as any associated data security concerns.
[
[Back to top ⬆️](#top) ]
### License
The source code in this repo is licensed under the Apache 2.0
license. The Yi series models are fully open for academic research and free for commercial use, with automatic permission granted upon application. All usage must adhere to the Yi Series Models Community License Agreement 2.1.
For free commercial use, you only need to send an email to get official commercial permission.
[
[Back to top ⬆️](#top) ]
|
[
"### Building the Next Generation of Open-Source and Bilingual LLMs\n\n\n\n\n[Hugging Face](URL target=) • [ModelScope](URL target=) • ️ [WiseModel](URL target=)\n\n\n\n\n Ask questions or discuss ideas on [GitHub](01-ai/Yi · Discussions) \n\n\n\n\n Join us on [Discord](URL target=) or [WeChat](有官方的微信群嘛 · Issue #43 · 01-ai/Yi) \n\n\n\n\n Check out [Grow at [Yi Learning Hub](#learning-hub)](URL Yi Tech Report </a>\n</p> \n<p align=)\n\n\n\n\n---\n\n\n\n Table of Contents\n* What is Yi?\n\t+ Introduction\n\t+ Models\n\t\t- Chat models\n\t\t- Base models\n\t\t- Model info\n\t+ News\n* How to use Yi?\n\t+ Quick start\n\t\t- Choose your path\n\t\t- pip\n\t\t- docker\n\t\t- URL\n\t\t- conda-lock\n\t\t- Web demo\n\t+ Fine-tuning\n\t+ Quantization\n\t+ Deployment\n\t+ Learning hub\n* Why Yi?\n\t+ Ecosystem\n\t\t- Upstream\n\t\t- Downstream\n\t\t\t* Serving\n\t\t\t* Quantization\n\t\t\t* Fine-tuning\n\t\t\t* API\n\t+ Benchmarks\n\t\t- Base model performance\n\t\t- Chat model performance\n\t+ Tech report\n\t\t- Citation\n* Who can use Yi?\n* Misc.\n\t+ Acknowledgements\n\t+ Disclaimer\n\t+ License\n\n\n\n\n\n---\n\n\nWhat is Yi?\n===========\n\n\nIntroduction\n------------\n\n\n* The Yi series models are the next generation of open-source large language models trained from scratch by 01.AI.\n* Targeted as a bilingual language model and trained on 3T multilingual corpus, the Yi series models become one of the strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example,\n* Yi-34B-Chat model landed in second place (following GPT-4 Turbo), outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024).\n* Yi-34B model ranked first among all existing open-source models (such as Falcon-180B, Llama-70B, Claude) in both English and Chinese on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023).\n* (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem.\n\n\n If you're interested in Yi's adoption of Llama architecture and license usage policy, see Yi's relation with Llama. ⬇️ \n\n\n> \n> TL;DR\n> \n> \n> The Yi series models adopt the same model architecture as Llama but are NOT derivatives of Llama.\n> \n> \n> \n\n+ Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018.\n+ Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. 
This positions Llama as the recognized foundational framework for models including Yi.\n+ Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems.\n+ However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights.\n\n\n\t- As Llama's structure is employed by the majority of open-source models, the key factors of determining model performance are training datasets, training pipelines, and training infrastructure.\n\t- Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. This effort has led to excellent performance with Yi series models ranking just behind GPT4 and surpassing Llama on the Alpaca Leaderboard in Dec 2023.\n\n\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nNews\n----\n\n\n\n **2024-03-16**: The `Yi-9B-200K` is open-sourced and available to the public.\n\n\n **2024-03-08**: [**2024-03-06**: The `Yi-9B` is open-sourced and available to the public.\n \n\n`Yi-9B` stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.\n\n\n **2024-01-23**: The Yi-VL models, `[`[Chat models](URL (based on data available up to January 2024).</li>\n</details>\n<details>\n<summary> <b>2023-11-23</b>: <a href=) are open-sourced and available to the public.`](URL and <code><a href=)`\n \nThis release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ.\n* 'Yi-34B-Chat'\n* 'Yi-34B-Chat-4bits'\n* 'Yi-34B-Chat-8bits'\n* 'Yi-6B-Chat'\n* 'Yi-6B-Chat-4bits'\n* 'Yi-6B-Chat-8bits'\n\n\nYou can try some of them interactively at:\n\n\n* Hugging Face\n* Replicate\n\n\n\n\n **2023-11-23**: The Yi Series Models Community License Agreement is updated to [The base models,](URL\n</details>\n<details> \n<summary> <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary>\n<br>Application form:\n<ul>\n<li>English</li>\n<li>Chinese</li>\n</ul>\n</details>\n<details>\n<summary> <b>2023-11-05</b>: <a href=) `Yi-6B-200K` and `Yi-34B-200K`, are open-sourced and available to the public.\n \nThis release contains two base models with the same parameter sizes as the previous\nrelease, except that the context window is extended to 200K.\n\n\n **2023-11-02**: [The base models,](#base-models) `Yi-6B` and `Yi-34B`, are open-sourced and available to the public.\n \nThe first public release contains two bilingual (English/Chinese) base models\nwith the parameter sizes of 6B and 34B. Both of them are trained with 4K\nsequence length and can be extended to 32K during inference time.\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nModels\n------\n\n\nYi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements.\n\n\nIf you want to deploy Yi models, make sure you meet the software and hardware requirements.",
"### Chat models\n\n\n\n - 4-bit series models are quantized by AWQ. \n - 8-bit series models are quantized by GPTQ \n - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090).",
"### Base models\n\n\n\n - 200k is roughly equivalent to 400,000 Chinese characters. \n - If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run 'git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf' to download the weight.",
"### Model info\n\n\n* For chat and base models\n\n\nModel: 9B series models, Intro: It is the best at coding and math in the Yi series models., Default context window: Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens.\nModel: 34B series models, Intro: They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. It's a cost-effective solution that's affordable and equipped with emergent ability., Default context window: 3T\n\n\n* For chat models\n\n\nFor chat model limitations, see the explanations below. ⬇️\n\n\t \n\tThe released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training.\n\t \n\tHowever, this higher diversity might amplify certain existing issues, including:\n\t+ Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucination that are not based on accurate data or logical reasoning.\n\t\n\t+ Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. The increased diversity can lead to varying results even under similar input conditions.\n\t\n\t+ Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.\n\t\n\t+ To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top\\_p, or top\\_k. These adjustments can help in the balance between creativity and coherence in the model's outputs.](URL Tech Report</a> is published! </summary>\n</details>\n<details open>\n <summary> <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary>\n <br>\nIn the )\n [\n [Back to top ⬆️](#top) ] \n\n\n\nHow to use Yi?\n==============\n\n\n* Quick start\n\t+ Choose your path\n\t+ pip\n\t+ docker\n\t+ conda-lock\n\t+ URL\n\t+ Web demo\n* Fine-tuning\n* Quantization\n* Deployment\n* Learning hub\n\n\nQuick start\n-----------\n\n\nGetting up and running with Yi models is simple with multiple choices available.",
"### Choose your path\n\n\nSelect one of the following paths to begin your journey with Yi!\n\n\n!Quick start - Choose your path",
"#### Deploy Yi locally\n\n\nIf you prefer to deploy Yi models locally,\n\n\n* ️ and you have sufficient resources (for example, NVIDIA A800 80GB), you can choose one of the following methods:\n\n\n\t+ pip\n\t+ Docker\n\t+ conda-lock\n* ️ and you have limited resources (for example, a MacBook Pro), you can use URL.",
"#### Not to deploy Yi locally\n\n\nIf you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options.",
"##### ️ Run Yi with APIs\n\n\nIf you want to explore more features of Yi, you can adopt one of these methods:\n\n\n* Yi APIs (Yi official)\n\n\n\t+ Early access has been granted to some applicants. Stay tuned for the next round of access!\n* Yi APIs (Replicate)",
"##### ️ Run Yi in playground\n\n\nIf you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options:\n\n\n* Yi-34B-Chat-Playground (Yi official)\n\n\n\t+ Access is available through a whitelist. Welcome to apply (fill out a form in English or Chinese).\n* Yi-34B-Chat-Playground (Replicate)",
"##### ️ Chat with Yi\n\n\nIf you want to chat with Yi, you can use one of these online services, which offer a similar user experience:\n\n\n* Yi-34B-Chat (Yi official on Hugging Face)\n\n\n\t+ No registration is required.\n* Yi-34B-Chat (Yi official beta)\n\n\n\t+ Access is available through a whitelist. Welcome to apply (fill out a form in English or Chinese).\n\n\n [\n [Back to top ⬆️](#top) ]",
"### Quick start - pip\n\n\nThis tutorial guides you through every step of running Yi-34B-Chat locally on an A800 (80G) and then performing inference.",
"#### Step 0: Prerequisites\n\n\n* Make sure Python 3.10 or a later version is installed.\n* If you want to run other Yi models, see software and hardware requirements.",
"#### Step 1: Prepare your environment\n\n\nTo set up the environment and install the required packages, execute the following command.",
"#### Step 2: Download the Yi model\n\n\nYou can download the weights and tokenizer of Yi models from the following sources:\n\n\n* Hugging Face\n* ModelScope\n* WiseModel",
"#### Step 3: Perform inference\n\n\nYou can perform inference with Yi chat or base models as below.",
"##### Perform inference with Yi chat model\n\n\n1. Create a file named 'quick\\_start.py' and copy the following content to it.\n2. Run 'quick\\_start.py'.\n\n\nThen you can see an output similar to the one below.",
"##### Perform inference with Yi base model\n\n\n* Yi-34B\n\n\nThe steps are similar to pip - Perform inference with Yi chat model.\n\n\nYou can use the existing file 'text\\_generation.py'.\n\n\nThen you can see an output similar to the one below.\n\n\n\nOutput. ⬇️ \n \n\nPrompt: Let me tell you an interesting story about cat Tom and mouse Jerry,\n\n\nGeneration: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up...\n* Yi-9B\n\n\nInput\n\n\nOutput\n\n\n [\n [Back to top ⬆️](#top) ]",
"### Quick start - Docker\n\n\n\n Run Yi-34B-chat locally with Docker: a step-by-step guide. ⬇️\n \nThis tutorial guides you through every step of running **Yi-34B-Chat on an A800 GPU** or **4\\*4090** locally and then performing inference.\n #### Step 0: Prerequisites\n\n\nMake sure you've installed [Step 1: Start Docker \n\n```\ndocker run -it --gpus all \\\n-v <your-model-path>: /models\nURL\n\n```\n\nAlternatively, you can pull the Yi Docker image from `URL",
"#### Step 2: Perform inference\n\n\nYou can perform inference with Yi chat or base models as below.",
"##### Perform inference with Yi chat model\n\n\nThe steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model).\n\n\n**Note** that the only difference is to set `model_path = '<your-model-mount-path>'` instead of `model_path = '<your-model-path>'`.",
"##### Perform inference with Yi base model\n\n\nThe steps are similar to [pip - Perform inference with Yi base model](#perform-inference-with-yi-base-model).\n\n\n**Note** that the only difference is to set `--model <your-model-mount-path>'` instead of `model <your-model-path>`.`](URL and <a href=)",
"### Quick start - conda-lock\n\n\n\nYou can use `[[* Step 0: Prerequisites\n* Step 1: Download URL\n* Step 2: Download Yi model\n* Step 3: Perform inference",
"#### Step 0: Prerequisites\n\n\n* This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip.\n* Make sure 'git-lfs' is installed on your machine.",
"#### Step 1: Download 'URL'\n\n\nTo clone the 'URL' repository, run the following command.",
"#### Step 2: Download Yi model\n\n\n2.1 To clone XeIaso/yi-chat-6B-GGUF with just pointers, run the following command.\n\n\n2.2 To download a quantized Yi model (yi-chat-6b.Q2\\_K.gguf), run the following command.",
"#### Step 3: Perform inference\n\n\nTo perform inference with the Yi model, you can use one of the following methods.\n\n\n* Method 1: Perform inference in terminal\n* Method 2: Perform inference in web",
"##### Method 1: Perform inference in terminal\n\n\nTo compile 'URL' using 4 threads and then conduct inference, navigate to the 'URL' directory, and run the following command.\n\n\n\n> \n> ##### Tips\n> \n> \n> * Replace '/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2\\_K.gguf' with the actual path of your model.\n> * By default, the model operates in completion mode.\n> * For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run './main -h' to check detailed descriptions and usage.\n> \n> \n> \n\n\nNow you have successfully asked a question to the Yi model and got an answer!",
"##### Method 2: Perform inference in web\n\n\n1. To initialize a lightweight and swift chatbot, run the following command.\n\n\nThen you can get an output like this:\n2. To access the chatbot interface, open your web browser and enter 'http://0.0.0.0:8080' into the address bar.\n\n\n!Yi model chatbot interface - URL\n3. Enter a question, such as \"How do you feed your pet fox? Please answer this question in 6 simple steps\" into the prompt window, and you will receive a corresponding answer.\n\n\n!Ask a question to Yi model - URL](URL for installing these dependencies.\n<br>\nTo install the dependencies, follow these steps:\n<ol>\n<li>\n<p>Install micromamba by following the instructions available <a href=\"URL</p>\n</li>\n<li>\n<p>Execute <code>micromamba install -y -n yi -f URL</code> to create a conda environment named <code>yi</code> and install the necessary dependencies.</p>\n</li>\n</ol>\n</details>\n<h3>Quick start - URL</h3>\n<details>\n<summary> Run Yi-chat-6B-2bits locally with URL: a step-by-step guide. ⬇️</summary> \n<br>This tutorial guides you through every step of running a quantized model (<a href=)](URL to generate fully reproducible lock files for conda environments. ⬇️</summary>\n<br>\nYou can refer to <a href=)`\n [\n [Back to top ⬆️](#top) ]",
"### Web demo\n\n\nYou can build a web UI demo for Yi chat models (note that Yi base models are not supported in this senario).\n\n\nStep 1: Prepare your environment.\n\n\nStep 2: Download the Yi model.\n\n\nStep 3. To start a web service locally, run the following command.\n\n\nYou can access the web UI by entering the address provided in the console into your browser.\n\n\n!Quick start - web demo\n\n\n [\n [Back to top ⬆️](#top) ]",
"### Fine-tuning\n\n\nOnce finished, you can compare the finetuned model and the base model with the following command:\n\n\nFor advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ ### Finetune code for Yi 6B and 34B",
"#### Preparation",
"##### From Image\n\n\nBy default, we use a small dataset from BAAI/COIG to finetune the base model.\nYou can also prepare your customized dataset in the following 'jsonl' format:\n\n\nAnd then mount them in the container to replace the default ones:",
"##### From Local Server\n\n\nMake sure you have conda. If not, use\n\n\nThen, create a conda env:",
"#### Hardware Setup\n\n\nFor the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended.\n\n\nFor the Yi-34B model, because the usage of the zero-offload technique consumes a lot of CPU memory, please be careful to limit the number of GPUs in the 34B finetune training. Please use CUDA\\_VISIBLE\\_DEVICES to limit the number of GPUs (as shown in scripts/run\\_sft\\_Yi\\_34b.sh).\n\n\nA typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 in running by CUDA\\_VISIBLE\\_DEVICES=0,1,2,3), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB.",
"#### Quick Start\n\n\nDownload a LLM-base model to MODEL\\_PATH (6B and 34B). A typical folder of models is like:\n\n\nDownload a dataset from huggingface to local storage DATA\\_PATH, e.g. Dahoas/rm-static.\n\n\n'finetune/yi\\_example\\_dataset' has example datasets, which are modified from BAAI/COIG\n\n\n'cd' into the scripts folder, copy and paste the script, and run. For example:\n\n\nFor the Yi-6B base model, setting training\\_debug\\_steps=20 and num\\_train\\_epochs=4 can output a chat model, which takes about 20 minutes.\n\n\nFor the Yi-34B base model, it takes a relatively long time for initialization. Please be patient.",
"#### Evaluation\n\n\nThen you'll see the answer from both the base model and the finetuned model.\n\n\n\n\n [\n [Back to top ⬆️](#top) ]",
"### Quantization",
"#### GPT-Q\n\n\nOnce finished, you can then evaluate the resulting model as follows:\n\n\nFor details, see the explanations below. ⬇️ #### GPT-Q quantization\n\n\nGPT-Q is a PTQ (Post-Training Quantization)\nmethod. It saves memory and provides potential speedups while retaining the accuracy\nof the model.\n\n\nYi models can be GPT-Q quantized without a lot of efforts.\nWe provide a step-by-step tutorial below.\n\n\nTo run GPT-Q, we will use AutoGPTQ and\nexllama.\nAnd the huggingface transformers has integrated optimum and auto-gptq to perform\nGPTQ quantization on language models.",
"##### Do Quantization\n\n\nThe 'quant\\_autogptq.py' script is provided for you to perform GPT-Q quantization:",
"##### Run Quantized Model\n\n\nYou can run a quantized model using the 'eval\\_quantized\\_model.py':",
"#### AWQ\n\n\nOnce finished, you can then evaluate the resulting model as follows:\n\n\nFor details, see the explanations below. ⬇️ #### AWQ quantization\n\n\nAWQ is a PTQ (Post-Training Quantization)\nmethod. It's an efficient and accurate low-bit weight quantization (INT3/4) for LLMs.\n\n\nYi models can be AWQ quantized without a lot of efforts.\nWe provide a step-by-step tutorial below.\n\n\nTo run AWQ, we will use AutoAWQ.",
"##### Do Quantization\n\n\nThe 'quant\\_autoawq.py' script is provided for you to perform AWQ quantization:",
"##### Run Quantized Model\n\n\nYou can run a quantized model using the 'eval\\_quantized\\_model.py':\n\n\n\n\n [\n [Back to top ⬆️](#top) ]",
"### Deployment\n\n\nIf you want to deploy Yi models, make sure you meet the software and hardware requirements.",
"#### Software requirements\n\n\nBefore using Yi quantized models, make sure you've installed the correct software listed below.",
"#### Hardware requirements\n\n\nBefore deploying Yi in your environment, make sure your hardware meets the following requirements.",
"##### Chat models\n\n\n\nBelow are detailed minimum VRAM requirements under different batch use cases.",
"##### Base models\n\n\n\n [\n [Back to top ⬆️](#top) ]",
"### Learning hub\n\n\n\n If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️\n \n\nWelcome to the Yi learning hub!\n\n\nWhether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more.\n\n\nThe content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions!\n\n\nAt the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below.\n\n\nWith all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning!",
"#### Tutorials",
"##### English tutorials",
"##### Chinese tutorials\n\n\n\n\nWhy Yi?\n=======\n\n\n* Ecosystem\n\t+ Upstream\n\t+ Downstream\n\t\t- Serving\n\t\t- Quantization\n\t\t- Fine-tuning\n\t\t- API\n* Benchmarks\n\t+ Chat model performance\n\t+ Base model performance\n\t\t- Yi-34B and Yi-34B-200K\n\t\t- Yi-9B\n\n\nEcosystem\n---------\n\n\nYi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity.\n\n\n* Upstream\n* Downstream\n\t+ Serving\n\t+ Quantization\n\t+ Fine-tuning\n\t+ API",
"### Upstream\n\n\nThe Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency.\n\n\nFor example, the Yi series models are saved in the format of the Llama model. You can directly use 'LlamaForCausalLM' and 'LlamaTokenizer' to load the model. For more information, see Use the chat model.\n\n\n [\n [Back to top ⬆️](#top) ]",
"### Downstream\n\n\n\n> \n> Tip\n> \n> \n> * Feel free to create a PR and share the fantastic work you've built using the Yi series models.\n> * To help others quickly understand your work, it is recommended to use the format of ': + '.\n> \n> \n>",
"#### Serving\n\n\nIf you want to get up with Yi in a few minutes, you can use the following services built upon Yi.\n\n\n* Yi-34B-Chat: you can chat with Yi using one of the following platforms:\n\n\n\t+ Yi-34B-Chat | Hugging Face\n\t+ Yi-34B-Chat | Yi Platform: Note that currently it's available through a whitelist. Welcome to apply (fill out a form in English or Chinese) and experience it firsthand!\n* Yi-6B-Chat (Replicate): you can use this model with more options by setting additional parameters and calling APIs.\n* ScaleLLM: you can use this service to run Yi models locally with added flexibility and customization.",
"#### Quantization\n\n\nIf you have limited computational capabilities, you can use Yi's quantized models as follows.\n\n\nThese quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage.\n\n\n* TheBloke/Yi-34B-GPTQ\n* TheBloke/Yi-34B-GGUF\n* TheBloke/Yi-34B-AWQ",
"#### Fine-tuning\n\n\nIf you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below.\n\n\n* TheBloke Models: this site hosts numerous fine-tuned models derived from various LLMs including Yi.\n\n\nThis is not an exhaustive list for Yi, but to name a few sorted on downloads:\n\n\n\t+ TheBloke/dolphin-2\\_2-yi-34b-AWQ\n\t+ TheBloke/Yi-34B-Chat-AWQ\n\t+ TheBloke/Yi-34B-Chat-GPTQ\n* SUSTech/SUS-Chat-34B: this model ranked first among all models below 70B and outperformed the twice larger deepseek-llm-67b-chat. You can check the result on the Open LLM Leaderboard.\n* OrionStarAI/OrionStar-Yi-34B-Chat-Llama: this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the OpenCompass LLM Leaderboard.\n* NousResearch/Nous-Capybara-34B: this model is trained with 200K context length and 3 epochs on the Capybara dataset.",
"#### API\n\n\n* amazing-openai-api: this tool converts Yi model APIs into the OpenAI API format out of the box.\n* LlamaEdge: this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust.\n\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nTech report\n-----------\n\n\nFor detailed capabilities of the Yi series model, see Yi: Open Foundation Models by 01.AI.\n\n\nBenchmarks\n----------\n\n\n* Chat model performance\n* Base model performance",
"### Chat model performance\n\n\nYi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models in the benchmarks including MMLU, CMMLU, BBH, GSM8k, and more.\n\n\n!Chat model performance\n\n\n\n Evaluation methods and challenges. ⬇️ \n* Evaluation methods: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA.\n* Zero-shot vs. few-shot: in chat models, the zero-shot approach is more commonly employed.\n* Evaluation strategy: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text.\n* Challenges faced: some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results.\n\n\n**\\***: C-Eval results are evaluated on the validation datasets",
"### Base model performance",
"#### Yi-34B and Yi-34B-200K\n\n\nThe Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more.\n\n\n!Base model performance\n\n\n\n Evaluation methods. ⬇️\n* Disparity in results: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass.\n* Investigation findings: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences.\n* Uniform benchmarking process: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content.\n* Efforts to retrieve unreported scores: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline.\n* Extensive model evaluation: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension.\n* Special configurations: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category \"Math & Code\".\n* Falcon-180B caveat: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated.",
"#### Yi-9B\n\n\nYi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.\n\n\n!Yi-9B benchmark - details\n\n\n* In terms of overall ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B.\n\n\n!Yi-9B benchmark - overall\n* In terms of coding ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B.\n\n\n!Yi-9B benchmark - code\n* In terms of math ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B.\n\n\n!Yi-9B benchmark - math\n* In terms of common sense and reasoning ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B.\n\n\n!Yi-9B benchmark - text\n\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nWho can use Yi?\n===============\n\n\nEveryone!\n\n\n* The Yi series models are free for personal usage, academic purposes, and commercial use. All usage must adhere to the Yi Series Models Community License Agreement 2.1\n* For free commercial use, you only need to complete this form to get a Yi Model Commercial License.\n\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nMisc.\n=====",
"### Acknowledgments\n\n\nA heartfelt thank you to each of you who have made contributions to the Yi community! You have helped Yi not just a project, but a vibrant, growing home for innovation.\n\n\n ]",
"### Disclaimer\n\n\nWe use data compliance checking algorithms during the training process, to\nensure the compliance of the trained model to the best of our ability. Due to\ncomplex data and the diversity of language model usage scenarios, we cannot\nguarantee that the model will generate correct, and reasonable output in all\nscenarios. Please be aware that there is still a risk of the model producing\nproblematic outputs. We will not be responsible for any risks and issues\nresulting from misuse, misguidance, illegal usage, and related misinformation,\nas well as any associated data security concerns.\n\n\n [\n [Back to top ⬆️](#top) ]",
"### License\n\n\nThe source code in this repo is licensed under the Apache 2.0\nlicense. The Yi series models are fully open for academic research and free for commercial use, with automatic permission granted upon application. All usage must adhere to the Yi Series Models Community License Agreement 2.1.\nFor free commercial use, you only need to send an email to get official commercial permission.\n\n\n [\n [Back to top ⬆️](#top) ]"
] |
[
"TAGS\n#gguf #arxiv-2403.04652 #arxiv-2311.16502 #arxiv-2401.11944 #region-us \n",
"### Building the Next Generation of Open-Source and Bilingual LLMs\n\n\n\n\n[Hugging Face](URL target=) • [ModelScope](URL target=) • ️ [WiseModel](URL target=)\n\n\n\n\n Ask questions or discuss ideas on [GitHub](01-ai/Yi · Discussions) \n\n\n\n\n Join us on [Discord](URL target=) or [WeChat](有官方的微信群嘛 · Issue #43 · 01-ai/Yi) \n\n\n\n\n Check out [Grow at [Yi Learning Hub](#learning-hub)](URL Yi Tech Report </a>\n</p> \n<p align=)\n\n\n\n\n---\n\n\n\n Table of Contents\n* What is Yi?\n\t+ Introduction\n\t+ Models\n\t\t- Chat models\n\t\t- Base models\n\t\t- Model info\n\t+ News\n* How to use Yi?\n\t+ Quick start\n\t\t- Choose your path\n\t\t- pip\n\t\t- docker\n\t\t- URL\n\t\t- conda-lock\n\t\t- Web demo\n\t+ Fine-tuning\n\t+ Quantization\n\t+ Deployment\n\t+ Learning hub\n* Why Yi?\n\t+ Ecosystem\n\t\t- Upstream\n\t\t- Downstream\n\t\t\t* Serving\n\t\t\t* Quantization\n\t\t\t* Fine-tuning\n\t\t\t* API\n\t+ Benchmarks\n\t\t- Base model performance\n\t\t- Chat model performance\n\t+ Tech report\n\t\t- Citation\n* Who can use Yi?\n* Misc.\n\t+ Acknowledgements\n\t+ Disclaimer\n\t+ License\n\n\n\n\n\n---\n\n\nWhat is Yi?\n===========\n\n\nIntroduction\n------------\n\n\n* The Yi series models are the next generation of open-source large language models trained from scratch by 01.AI.\n* Targeted as a bilingual language model and trained on 3T multilingual corpus, the Yi series models become one of the strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example,\n* Yi-34B-Chat model landed in second place (following GPT-4 Turbo), outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024).\n* Yi-34B model ranked first among all existing open-source models (such as Falcon-180B, Llama-70B, Claude) in both English and Chinese on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023).\n* (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem.\n\n\n If you're interested in Yi's adoption of Llama architecture and license usage policy, see Yi's relation with Llama. ⬇️ \n\n\n> \n> TL;DR\n> \n> \n> The Yi series models adopt the same model architecture as Llama but are NOT derivatives of Llama.\n> \n> \n> \n\n+ Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018.\n+ Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. 
This positions Llama as the recognized foundational framework for models including Yi.\n+ Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems.\n+ However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights.\n\n\n\t- As Llama's structure is employed by the majority of open-source models, the key factors of determining model performance are training datasets, training pipelines, and training infrastructure.\n\t- Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. This effort has led to excellent performance with Yi series models ranking just behind GPT4 and surpassing Llama on the Alpaca Leaderboard in Dec 2023.\n\n\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nNews\n----\n\n\n\n **2024-03-16**: The `Yi-9B-200K` is open-sourced and available to the public.\n\n\n **2024-03-08**: [**2024-03-06**: The `Yi-9B` is open-sourced and available to the public.\n \n\n`Yi-9B` stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.\n\n\n **2024-01-23**: The Yi-VL models, `[`[Chat models](URL (based on data available up to January 2024).</li>\n</details>\n<details>\n<summary> <b>2023-11-23</b>: <a href=) are open-sourced and available to the public.`](URL and <code><a href=)`\n \nThis release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ.\n* 'Yi-34B-Chat'\n* 'Yi-34B-Chat-4bits'\n* 'Yi-34B-Chat-8bits'\n* 'Yi-6B-Chat'\n* 'Yi-6B-Chat-4bits'\n* 'Yi-6B-Chat-8bits'\n\n\nYou can try some of them interactively at:\n\n\n* Hugging Face\n* Replicate\n\n\n\n\n **2023-11-23**: The Yi Series Models Community License Agreement is updated to [The base models,](URL\n</details>\n<details> \n<summary> <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary>\n<br>Application form:\n<ul>\n<li>English</li>\n<li>Chinese</li>\n</ul>\n</details>\n<details>\n<summary> <b>2023-11-05</b>: <a href=) `Yi-6B-200K` and `Yi-34B-200K`, are open-sourced and available to the public.\n \nThis release contains two base models with the same parameter sizes as the previous\nrelease, except that the context window is extended to 200K.\n\n\n **2023-11-02**: [The base models,](#base-models) `Yi-6B` and `Yi-34B`, are open-sourced and available to the public.\n \nThe first public release contains two bilingual (English/Chinese) base models\nwith the parameter sizes of 6B and 34B. Both of them are trained with 4K\nsequence length and can be extended to 32K during inference time.\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nModels\n------\n\n\nYi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements.\n\n\nIf you want to deploy Yi models, make sure you meet the software and hardware requirements.",
"### Chat models\n\n\n\n - 4-bit series models are quantized by AWQ. \n - 8-bit series models are quantized by GPTQ \n - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090).",
"### Base models\n\n\n\n - 200k is roughly equivalent to 400,000 Chinese characters. \n - If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run 'git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf' to download the weight.",
"### Model info\n\n\n* For chat and base models\n\n\nModel: 9B series models, Intro: It is the best at coding and math in the Yi series models., Default context window: Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens.\nModel: 34B series models, Intro: They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. It's a cost-effective solution that's affordable and equipped with emergent ability., Default context window: 3T\n\n\n* For chat models\n\n\nFor chat model limitations, see the explanations below. ⬇️\n\n\t \n\tThe released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training.\n\t \n\tHowever, this higher diversity might amplify certain existing issues, including:\n\t+ Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucination that are not based on accurate data or logical reasoning.\n\t\n\t+ Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. The increased diversity can lead to varying results even under similar input conditions.\n\t\n\t+ Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.\n\t\n\t+ To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top\\_p, or top\\_k. These adjustments can help in the balance between creativity and coherence in the model's outputs.](URL Tech Report</a> is published! </summary>\n</details>\n<details open>\n <summary> <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary>\n <br>\nIn the )\n [\n [Back to top ⬆️](#top) ] \n\n\n\nHow to use Yi?\n==============\n\n\n* Quick start\n\t+ Choose your path\n\t+ pip\n\t+ docker\n\t+ conda-lock\n\t+ URL\n\t+ Web demo\n* Fine-tuning\n* Quantization\n* Deployment\n* Learning hub\n\n\nQuick start\n-----------\n\n\nGetting up and running with Yi models is simple with multiple choices available.",
"### Choose your path\n\n\nSelect one of the following paths to begin your journey with Yi!\n\n\n!Quick start - Choose your path",
"#### Deploy Yi locally\n\n\nIf you prefer to deploy Yi models locally,\n\n\n* ️ and you have sufficient resources (for example, NVIDIA A800 80GB), you can choose one of the following methods:\n\n\n\t+ pip\n\t+ Docker\n\t+ conda-lock\n* ️ and you have limited resources (for example, a MacBook Pro), you can use URL.",
"#### Not to deploy Yi locally\n\n\nIf you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options.",
"##### ️ Run Yi with APIs\n\n\nIf you want to explore more features of Yi, you can adopt one of these methods:\n\n\n* Yi APIs (Yi official)\n\n\n\t+ Early access has been granted to some applicants. Stay tuned for the next round of access!\n* Yi APIs (Replicate)",
"##### ️ Run Yi in playground\n\n\nIf you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options:\n\n\n* Yi-34B-Chat-Playground (Yi official)\n\n\n\t+ Access is available through a whitelist. Welcome to apply (fill out a form in English or Chinese).\n* Yi-34B-Chat-Playground (Replicate)",
"##### ️ Chat with Yi\n\n\nIf you want to chat with Yi, you can use one of these online services, which offer a similar user experience:\n\n\n* Yi-34B-Chat (Yi official on Hugging Face)\n\n\n\t+ No registration is required.\n* Yi-34B-Chat (Yi official beta)\n\n\n\t+ Access is available through a whitelist. Welcome to apply (fill out a form in English or Chinese).\n\n\n [\n [Back to top ⬆️](#top) ]",
"### Quick start - pip\n\n\nThis tutorial guides you through every step of running Yi-34B-Chat locally on an A800 (80G) and then performing inference.",
"#### Step 0: Prerequisites\n\n\n* Make sure Python 3.10 or a later version is installed.\n* If you want to run other Yi models, see software and hardware requirements.",
"#### Step 1: Prepare your environment\n\n\nTo set up the environment and install the required packages, execute the following command.",
"#### Step 2: Download the Yi model\n\n\nYou can download the weights and tokenizer of Yi models from the following sources:\n\n\n* Hugging Face\n* ModelScope\n* WiseModel",
"#### Step 3: Perform inference\n\n\nYou can perform inference with Yi chat or base models as below.",
"##### Perform inference with Yi chat model\n\n\n1. Create a file named 'quick\\_start.py' and copy the following content to it.\n2. Run 'quick\\_start.py'.\n\n\nThen you can see an output similar to the one below.",
"##### Perform inference with Yi base model\n\n\n* Yi-34B\n\n\nThe steps are similar to pip - Perform inference with Yi chat model.\n\n\nYou can use the existing file 'text\\_generation.py'.\n\n\nThen you can see an output similar to the one below.\n\n\n\nOutput. ⬇️ \n \n\nPrompt: Let me tell you an interesting story about cat Tom and mouse Jerry,\n\n\nGeneration: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up...\n* Yi-9B\n\n\nInput\n\n\nOutput\n\n\n [\n [Back to top ⬆️](#top) ]",
"### Quick start - Docker\n\n\n\n Run Yi-34B-chat locally with Docker: a step-by-step guide. ⬇️\n \nThis tutorial guides you through every step of running **Yi-34B-Chat on an A800 GPU** or **4\\*4090** locally and then performing inference.\n #### Step 0: Prerequisites\n\n\nMake sure you've installed [Step 1: Start Docker \n\n```\ndocker run -it --gpus all \\\n-v <your-model-path>: /models\nURL\n\n```\n\nAlternatively, you can pull the Yi Docker image from `URL",
"#### Step 2: Perform inference\n\n\nYou can perform inference with Yi chat or base models as below.",
"##### Perform inference with Yi chat model\n\n\nThe steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model).\n\n\n**Note** that the only difference is to set `model_path = '<your-model-mount-path>'` instead of `model_path = '<your-model-path>'`.",
"##### Perform inference with Yi base model\n\n\nThe steps are similar to [pip - Perform inference with Yi base model](#perform-inference-with-yi-base-model).\n\n\n**Note** that the only difference is to set `--model <your-model-mount-path>'` instead of `model <your-model-path>`.`](URL and <a href=)",
"### Quick start - conda-lock\n\n\n\nYou can use `[[* Step 0: Prerequisites\n* Step 1: Download URL\n* Step 2: Download Yi model\n* Step 3: Perform inference",
"#### Step 0: Prerequisites\n\n\n* This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip.\n* Make sure 'git-lfs' is installed on your machine.",
"#### Step 1: Download 'URL'\n\n\nTo clone the 'URL' repository, run the following command.",
"#### Step 2: Download Yi model\n\n\n2.1 To clone XeIaso/yi-chat-6B-GGUF with just pointers, run the following command.\n\n\n2.2 To download a quantized Yi model (yi-chat-6b.Q2\\_K.gguf), run the following command.",
"#### Step 3: Perform inference\n\n\nTo perform inference with the Yi model, you can use one of the following methods.\n\n\n* Method 1: Perform inference in terminal\n* Method 2: Perform inference in web",
"##### Method 1: Perform inference in terminal\n\n\nTo compile 'URL' using 4 threads and then conduct inference, navigate to the 'URL' directory, and run the following command.\n\n\n\n> \n> ##### Tips\n> \n> \n> * Replace '/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2\\_K.gguf' with the actual path of your model.\n> * By default, the model operates in completion mode.\n> * For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run './main -h' to check detailed descriptions and usage.\n> \n> \n> \n\n\nNow you have successfully asked a question to the Yi model and got an answer!",
"##### Method 2: Perform inference in web\n\n\n1. To initialize a lightweight and swift chatbot, run the following command.\n\n\nThen you can get an output like this:\n2. To access the chatbot interface, open your web browser and enter 'http://0.0.0.0:8080' into the address bar.\n\n\n!Yi model chatbot interface - URL\n3. Enter a question, such as \"How do you feed your pet fox? Please answer this question in 6 simple steps\" into the prompt window, and you will receive a corresponding answer.\n\n\n!Ask a question to Yi model - URL](URL for installing these dependencies.\n<br>\nTo install the dependencies, follow these steps:\n<ol>\n<li>\n<p>Install micromamba by following the instructions available <a href=\"URL</p>\n</li>\n<li>\n<p>Execute <code>micromamba install -y -n yi -f URL</code> to create a conda environment named <code>yi</code> and install the necessary dependencies.</p>\n</li>\n</ol>\n</details>\n<h3>Quick start - URL</h3>\n<details>\n<summary> Run Yi-chat-6B-2bits locally with URL: a step-by-step guide. ⬇️</summary> \n<br>This tutorial guides you through every step of running a quantized model (<a href=)](URL to generate fully reproducible lock files for conda environments. ⬇️</summary>\n<br>\nYou can refer to <a href=)`\n [\n [Back to top ⬆️](#top) ]",
"### Web demo\n\n\nYou can build a web UI demo for Yi chat models (note that Yi base models are not supported in this senario).\n\n\nStep 1: Prepare your environment.\n\n\nStep 2: Download the Yi model.\n\n\nStep 3. To start a web service locally, run the following command.\n\n\nYou can access the web UI by entering the address provided in the console into your browser.\n\n\n!Quick start - web demo\n\n\n [\n [Back to top ⬆️](#top) ]",
"### Fine-tuning\n\n\nOnce finished, you can compare the finetuned model and the base model with the following command:\n\n\nFor advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ ### Finetune code for Yi 6B and 34B",
"#### Preparation",
"##### From Image\n\n\nBy default, we use a small dataset from BAAI/COIG to finetune the base model.\nYou can also prepare your customized dataset in the following 'jsonl' format:\n\n\nAnd then mount them in the container to replace the default ones:",
"##### From Local Server\n\n\nMake sure you have conda. If not, use\n\n\nThen, create a conda env:",
"#### Hardware Setup\n\n\nFor the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended.\n\n\nFor the Yi-34B model, because the usage of the zero-offload technique consumes a lot of CPU memory, please be careful to limit the number of GPUs in the 34B finetune training. Please use CUDA\\_VISIBLE\\_DEVICES to limit the number of GPUs (as shown in scripts/run\\_sft\\_Yi\\_34b.sh).\n\n\nA typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 in running by CUDA\\_VISIBLE\\_DEVICES=0,1,2,3), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB.",
"#### Quick Start\n\n\nDownload a LLM-base model to MODEL\\_PATH (6B and 34B). A typical folder of models is like:\n\n\nDownload a dataset from huggingface to local storage DATA\\_PATH, e.g. Dahoas/rm-static.\n\n\n'finetune/yi\\_example\\_dataset' has example datasets, which are modified from BAAI/COIG\n\n\n'cd' into the scripts folder, copy and paste the script, and run. For example:\n\n\nFor the Yi-6B base model, setting training\\_debug\\_steps=20 and num\\_train\\_epochs=4 can output a chat model, which takes about 20 minutes.\n\n\nFor the Yi-34B base model, it takes a relatively long time for initialization. Please be patient.",
"#### Evaluation\n\n\nThen you'll see the answer from both the base model and the finetuned model.\n\n\n\n\n [\n [Back to top ⬆️](#top) ]",
"### Quantization",
"#### GPT-Q\n\n\nOnce finished, you can then evaluate the resulting model as follows:\n\n\nFor details, see the explanations below. ⬇️ #### GPT-Q quantization\n\n\nGPT-Q is a PTQ (Post-Training Quantization)\nmethod. It saves memory and provides potential speedups while retaining the accuracy\nof the model.\n\n\nYi models can be GPT-Q quantized without a lot of efforts.\nWe provide a step-by-step tutorial below.\n\n\nTo run GPT-Q, we will use AutoGPTQ and\nexllama.\nAnd the huggingface transformers has integrated optimum and auto-gptq to perform\nGPTQ quantization on language models.",
"##### Do Quantization\n\n\nThe 'quant\\_autogptq.py' script is provided for you to perform GPT-Q quantization:",
"##### Run Quantized Model\n\n\nYou can run a quantized model using the 'eval\\_quantized\\_model.py':",
"#### AWQ\n\n\nOnce finished, you can then evaluate the resulting model as follows:\n\n\nFor details, see the explanations below. ⬇️ #### AWQ quantization\n\n\nAWQ is a PTQ (Post-Training Quantization)\nmethod. It's an efficient and accurate low-bit weight quantization (INT3/4) for LLMs.\n\n\nYi models can be AWQ quantized without a lot of efforts.\nWe provide a step-by-step tutorial below.\n\n\nTo run AWQ, we will use AutoAWQ.",
"##### Do Quantization\n\n\nThe 'quant\\_autoawq.py' script is provided for you to perform AWQ quantization:",
"##### Run Quantized Model\n\n\nYou can run a quantized model using the 'eval\\_quantized\\_model.py':\n\n\n\n\n [\n [Back to top ⬆️](#top) ]",
"### Deployment\n\n\nIf you want to deploy Yi models, make sure you meet the software and hardware requirements.",
"#### Software requirements\n\n\nBefore using Yi quantized models, make sure you've installed the correct software listed below.",
"#### Hardware requirements\n\n\nBefore deploying Yi in your environment, make sure your hardware meets the following requirements.",
"##### Chat models\n\n\n\nBelow are detailed minimum VRAM requirements under different batch use cases.",
"##### Base models\n\n\n\n [\n [Back to top ⬆️](#top) ]",
"### Learning hub\n\n\n\n If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️\n \n\nWelcome to the Yi learning hub!\n\n\nWhether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more.\n\n\nThe content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions!\n\n\nAt the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below.\n\n\nWith all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning!",
"#### Tutorials",
"##### English tutorials",
"##### Chinese tutorials\n\n\n\n\nWhy Yi?\n=======\n\n\n* Ecosystem\n\t+ Upstream\n\t+ Downstream\n\t\t- Serving\n\t\t- Quantization\n\t\t- Fine-tuning\n\t\t- API\n* Benchmarks\n\t+ Chat model performance\n\t+ Base model performance\n\t\t- Yi-34B and Yi-34B-200K\n\t\t- Yi-9B\n\n\nEcosystem\n---------\n\n\nYi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity.\n\n\n* Upstream\n* Downstream\n\t+ Serving\n\t+ Quantization\n\t+ Fine-tuning\n\t+ API",
"### Upstream\n\n\nThe Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency.\n\n\nFor example, the Yi series models are saved in the format of the Llama model. You can directly use 'LlamaForCausalLM' and 'LlamaTokenizer' to load the model. For more information, see Use the chat model.\n\n\n [\n [Back to top ⬆️](#top) ]",
"### Downstream\n\n\n\n> \n> Tip\n> \n> \n> * Feel free to create a PR and share the fantastic work you've built using the Yi series models.\n> * To help others quickly understand your work, it is recommended to use the format of ': + '.\n> \n> \n>",
"#### Serving\n\n\nIf you want to get up with Yi in a few minutes, you can use the following services built upon Yi.\n\n\n* Yi-34B-Chat: you can chat with Yi using one of the following platforms:\n\n\n\t+ Yi-34B-Chat | Hugging Face\n\t+ Yi-34B-Chat | Yi Platform: Note that currently it's available through a whitelist. Welcome to apply (fill out a form in English or Chinese) and experience it firsthand!\n* Yi-6B-Chat (Replicate): you can use this model with more options by setting additional parameters and calling APIs.\n* ScaleLLM: you can use this service to run Yi models locally with added flexibility and customization.",
"#### Quantization\n\n\nIf you have limited computational capabilities, you can use Yi's quantized models as follows.\n\n\nThese quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage.\n\n\n* TheBloke/Yi-34B-GPTQ\n* TheBloke/Yi-34B-GGUF\n* TheBloke/Yi-34B-AWQ",
"#### Fine-tuning\n\n\nIf you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below.\n\n\n* TheBloke Models: this site hosts numerous fine-tuned models derived from various LLMs including Yi.\n\n\nThis is not an exhaustive list for Yi, but to name a few sorted on downloads:\n\n\n\t+ TheBloke/dolphin-2\\_2-yi-34b-AWQ\n\t+ TheBloke/Yi-34B-Chat-AWQ\n\t+ TheBloke/Yi-34B-Chat-GPTQ\n* SUSTech/SUS-Chat-34B: this model ranked first among all models below 70B and outperformed the twice larger deepseek-llm-67b-chat. You can check the result on the Open LLM Leaderboard.\n* OrionStarAI/OrionStar-Yi-34B-Chat-Llama: this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the OpenCompass LLM Leaderboard.\n* NousResearch/Nous-Capybara-34B: this model is trained with 200K context length and 3 epochs on the Capybara dataset.",
"#### API\n\n\n* amazing-openai-api: this tool converts Yi model APIs into the OpenAI API format out of the box.\n* LlamaEdge: this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust.\n\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nTech report\n-----------\n\n\nFor detailed capabilities of the Yi series model, see Yi: Open Foundation Models by 01.AI.\n\n\nBenchmarks\n----------\n\n\n* Chat model performance\n* Base model performance",
"### Chat model performance\n\n\nYi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models in the benchmarks including MMLU, CMMLU, BBH, GSM8k, and more.\n\n\n!Chat model performance\n\n\n\n Evaluation methods and challenges. ⬇️ \n* Evaluation methods: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA.\n* Zero-shot vs. few-shot: in chat models, the zero-shot approach is more commonly employed.\n* Evaluation strategy: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text.\n* Challenges faced: some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results.\n\n\n**\\***: C-Eval results are evaluated on the validation datasets",
"### Base model performance",
"#### Yi-34B and Yi-34B-200K\n\n\nThe Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more.\n\n\n!Base model performance\n\n\n\n Evaluation methods. ⬇️\n* Disparity in results: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass.\n* Investigation findings: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences.\n* Uniform benchmarking process: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content.\n* Efforts to retrieve unreported scores: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline.\n* Extensive model evaluation: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension.\n* Special configurations: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category \"Math & Code\".\n* Falcon-180B caveat: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated.",
"#### Yi-9B\n\n\nYi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.\n\n\n!Yi-9B benchmark - details\n\n\n* In terms of overall ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B.\n\n\n!Yi-9B benchmark - overall\n* In terms of coding ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B.\n\n\n!Yi-9B benchmark - code\n* In terms of math ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B.\n\n\n!Yi-9B benchmark - math\n* In terms of common sense and reasoning ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B.\n\n\n!Yi-9B benchmark - text\n\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nWho can use Yi?\n===============\n\n\nEveryone!\n\n\n* The Yi series models are free for personal usage, academic purposes, and commercial use. All usage must adhere to the Yi Series Models Community License Agreement 2.1\n* For free commercial use, you only need to complete this form to get a Yi Model Commercial License.\n\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nMisc.\n=====",
"### Acknowledgments\n\n\nA heartfelt thank you to each of you who have made contributions to the Yi community! You have helped Yi not just a project, but a vibrant, growing home for innovation.\n\n\n ]",
"### Disclaimer\n\n\nWe use data compliance checking algorithms during the training process, to\nensure the compliance of the trained model to the best of our ability. Due to\ncomplex data and the diversity of language model usage scenarios, we cannot\nguarantee that the model will generate correct, and reasonable output in all\nscenarios. Please be aware that there is still a risk of the model producing\nproblematic outputs. We will not be responsible for any risks and issues\nresulting from misuse, misguidance, illegal usage, and related misinformation,\nas well as any associated data security concerns.\n\n\n [\n [Back to top ⬆️](#top) ]",
"### License\n\n\nThe source code in this repo is licensed under the Apache 2.0\nlicense. The Yi series models are fully open for academic research and free for commercial use, with automatic permission granted upon application. All usage must adhere to the Yi Series Models Community License Agreement 2.1.\nFor free commercial use, you only need to send an email to get official commercial permission.\n\n\n [\n [Back to top ⬆️](#top) ]"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Output_LayoutLMv3_v7
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the Noureddinesa/LayoutLmv3_v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1075
- Precision: 0.7928
- Recall: 0.8
- F1: 0.7964
- Accuracy: 0.9723
## Model description
More information needed
## Intended uses & limitations
More information needed
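
While detailed usage guidance is still missing, the checkpoint can be exercised as a standard LayoutLMv3 token-classification model. The sketch below is illustrative only: the repository id comes from this card's Hub path, the input image is a placeholder, `apply_ocr=True` assumes Tesseract and pytesseract are installed, and if this repository does not ship processor files they can be loaded from microsoft/layoutlmv3-base instead.

```python
from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification

model_id = "Noureddinesa/Output_LayoutLMv3_v7"  # this repository

# apply_ocr=True makes the processor run Tesseract to extract words and boxes
# from the image (requires pytesseract and the Tesseract binary).
processor = LayoutLMv3Processor.from_pretrained(model_id, apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained(model_id)

image = Image.open("example_document.png").convert("RGB")  # placeholder input
encoding = processor(image, return_tensors="pt")

logits = model(**encoding).logits
predicted_ids = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in predicted_ids])
```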
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 2600
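
For reference, these settings map onto Transformers `TrainingArguments` roughly as sketched below; the output directory and the 100-step evaluation/logging cadence (inferred from the results table that follows) are assumptions, and the Adam betas/epsilon listed above are the library defaults.

```python
from transformers import TrainingArguments

# Approximate reconstruction of the configuration listed above.
# output_dir and the eval/logging cadence are illustrative assumptions;
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
training_args = TrainingArguments(
    output_dir="Output_LayoutLMv3_v7",
    learning_rate=1e-6,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    lr_scheduler_type="cosine",
    max_steps=2600,
    evaluation_strategy="steps",
    eval_steps=100,
    logging_steps=100,
)
```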
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 9.09 | 100 | 0.4138 | 0.0 | 0.0 | 0.0 | 0.8962 |
| No log | 18.18 | 200 | 0.2709 | 0.1667 | 0.0273 | 0.0469 | 0.9014 |
| No log | 27.27 | 300 | 0.2003 | 0.6234 | 0.4364 | 0.5134 | 0.9360 |
| No log | 36.36 | 400 | 0.1711 | 0.6496 | 0.6909 | 0.6696 | 0.9481 |
| 0.3384 | 45.45 | 500 | 0.1624 | 0.6667 | 0.7273 | 0.6957 | 0.9498 |
| 0.3384 | 54.55 | 600 | 0.1502 | 0.6803 | 0.7545 | 0.7155 | 0.9550 |
| 0.3384 | 63.64 | 700 | 0.1428 | 0.7227 | 0.7818 | 0.7511 | 0.9602 |
| 0.3384 | 72.73 | 800 | 0.1452 | 0.7049 | 0.7818 | 0.7414 | 0.9550 |
| 0.3384 | 81.82 | 900 | 0.1260 | 0.7544 | 0.7818 | 0.7679 | 0.9671 |
| 0.0995 | 90.91 | 1000 | 0.1254 | 0.7544 | 0.7818 | 0.7679 | 0.9671 |
| 0.0995 | 100.0 | 1100 | 0.1211 | 0.7863 | 0.8364 | 0.8106 | 0.9706 |
| 0.0995 | 109.09 | 1200 | 0.1093 | 0.7739 | 0.8091 | 0.7911 | 0.9706 |
| 0.0995 | 118.18 | 1300 | 0.1081 | 0.7946 | 0.8091 | 0.8018 | 0.9723 |
| 0.0995 | 127.27 | 1400 | 0.1108 | 0.7778 | 0.8273 | 0.8018 | 0.9723 |
| 0.0608 | 136.36 | 1500 | 0.1115 | 0.7627 | 0.8182 | 0.7895 | 0.9706 |
| 0.0608 | 145.45 | 1600 | 0.1034 | 0.8053 | 0.8273 | 0.8161 | 0.9740 |
| 0.0608 | 154.55 | 1700 | 0.1050 | 0.7895 | 0.8182 | 0.8036 | 0.9723 |
| 0.0608 | 163.64 | 1800 | 0.1093 | 0.7739 | 0.8091 | 0.7911 | 0.9706 |
| 0.0608 | 172.73 | 1900 | 0.1043 | 0.7965 | 0.8182 | 0.8072 | 0.9723 |
| 0.0443 | 181.82 | 2000 | 0.1048 | 0.8036 | 0.8182 | 0.8108 | 0.9758 |
| 0.0443 | 190.91 | 2100 | 0.1067 | 0.8036 | 0.8182 | 0.8108 | 0.9758 |
| 0.0443 | 200.0 | 2200 | 0.1069 | 0.8036 | 0.8182 | 0.8108 | 0.9740 |
| 0.0443 | 209.09 | 2300 | 0.1083 | 0.7928 | 0.8 | 0.7964 | 0.9723 |
| 0.0443 | 218.18 | 2400 | 0.1079 | 0.7928 | 0.8 | 0.7964 | 0.9723 |
| 0.0381 | 227.27 | 2500 | 0.1076 | 0.7928 | 0.8 | 0.7964 | 0.9723 |
| 0.0381 | 236.36 | 2600 | 0.1075 | 0.7928 | 0.8 | 0.7964 | 0.9723 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
|
{"license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "datasets": ["Noureddinesa/LayoutLmv3_v1"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "Output_LayoutLMv3_v7", "results": []}]}
|
Noureddinesa/Output_LayoutLMv3_v7
| null |
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:Noureddinesa/LayoutLmv3_v1",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2024-04-12T20:20:30+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #layoutlmv3 #token-classification #generated_from_trainer #dataset-Noureddinesa/LayoutLmv3_v1 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Output\_LayoutLMv3\_v7
======================
This model is a fine-tuned version of microsoft/layoutlmv3-base on the Noureddinesa/LayoutLmv3_v1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1075
* Precision: 0.7928
* Recall: 0.8
* F1: 0.7964
* Accuracy: 0.9723
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-06
* train\_batch\_size: 12
* eval\_batch\_size: 12
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* training\_steps: 2600
### Training results
### Framework versions
* Transformers 4.29.2
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.13.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* training\\_steps: 2600",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.29.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.13.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #layoutlmv3 #token-classification #generated_from_trainer #dataset-Noureddinesa/LayoutLmv3_v1 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* training\\_steps: 2600",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.29.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.13.3"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
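
A minimal loading sketch is provided below as a starting point: the repository id is this model's Hub path, the prompt is illustrative, `device_map="auto"` assumes `accelerate` is installed, and a 4-bit serialized checkpoint (as the repo tags suggest) may additionally require `bitsandbytes`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GladiusTn/mistral7b_ocr_to_xml"  # this repository

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" needs `accelerate`; a 4-bit checkpoint may also need `bitsandbytes`.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Convert the following OCR output to XML:\n..."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```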
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
GladiusTn/mistral7b_ocr_to_xml
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-12T20:20:54+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"license": "apache-2.0", "library_name": "transformers"}
|
vicgalle/Configurable-Mistral-22B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T20:24:57+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
<img src="https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1/resolve/main/logo.png" alt="Zephyr 141B Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for Zephyr 141B-A35B
Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 141B-A35B is the latest model in the series, and is a fine-tuned version of [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) that was trained using a novel alignment algorithm called [Odds Ratio Preference Optimization (ORPO)](https://huggingface.co/papers/2403.07691) with **7k instances** for **1.3 hours** on 4 nodes of 8 x H100s. ORPO does not require an SFT step to achieve high performance and is thus much more computationally efficient than methods like DPO and PPO. To train Zephyr-141B-A35B, we used the [`argilla/distilabel-capybara-dpo-7k-binarized`](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized) preference dataset, which consists of synthetic, high-quality, multi-turn preferences that have been scored via LLMs.
> [!NOTE]
> This model was trained collaboratively between Argilla, KAIST, and Hugging Face
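For readers new to ORPO, the objective (in the notation of the ORPO paper linked above; reproduced here only for orientation) adds an odds-ratio preference term to the usual supervised fine-tuning loss, which is why no separate reference model or SFT stage is required:

$$
\mathcal{L}_{\mathrm{ORPO}} = \mathcal{L}_{\mathrm{SFT}} + \lambda \cdot \mathcal{L}_{\mathrm{OR}}, \qquad
\mathcal{L}_{\mathrm{OR}} = -\log \sigma\!\left(\log \frac{\mathrm{odds}_\theta(y_w \mid x)}{\mathrm{odds}_\theta(y_l \mid x)}\right), \qquad
\mathrm{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}
$$

where $y_w$ and $y_l$ are the chosen and rejected responses for prompt $x$, and $\lambda$ weights the preference term.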
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English.
- **License:** Apache 2.0
- **Finetuned from model:** [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/alignment-handbook
- **Dataset:** https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized
## Performance
Zephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [IFEval](https://arxiv.org/abs/2311.07911). The scores reported below were obtained using the [LightEval](https://github.com/huggingface/lighteval) evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.
| Model | MT Bench | IFEval | BBH | AGIEval |
|-----------------------------------------------------------------------------------------------------|---------:|-------:|------:|--------:|
| [zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1) | 8.17 | 65.06 | 58.96 | 44.16 |
| [databricks/dbrx-instruct](https://huggingface.co/databricks/dbrx-instruct) | 8.26 | 52.13 | 48.50 | 41.16 |
| [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8.30 | 55.08 | 45.31 | 47.68 |
## Intended uses & limitations
The model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install 'transformers>=4.39.3'
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
device_map="auto",
torch_dtype=torch.bfloat16,
)
messages = [
{
"role": "system",
"content": "You are Zephyr, a helpful assistant.",
},
{"role": "user", "content": "Explain how Mixture of Experts work in language a child would understand."},
]
outputs = pipe(
messages,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.95,
)
print(outputs[0]["generated_text"][-1]["content"])
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Zephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
The size and composition of the corpus used to train the base model (`mistral-community/Mixtral-8x22B-v0.1`) are also unknown; however, it likely included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 32
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
## Citation
If you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper:
```
@misc{hong2024orpo,
title={ORPO: Monolithic Preference Optimization without Reference Model},
author={Jiwoo Hong and Noah Lee and James Thorne},
year={2024},
eprint={2403.07691},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
You may also wish to cite the creators of this model:
```
@misc{zephyr_141b,
author = {Alvaro Bartolome and Jiwoo Hong and Noah Lee and Kashif Rasul and Lewis Tunstall},
title = {Zephyr 141B A35B},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1}}
}
```
|
{"license": "apache-2.0", "tags": ["trl", "orpo", "generated_from_trainer"], "datasets": ["argilla/distilabel-capybara-dpo-7k-binarized"], "base_model": "mistral-community/Mixtral-8x22B-v0.1", "model-index": [{"name": "zephyr-orpo-141b-A35b-v0.1", "results": []}]}
|
blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw4.2
| null |
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"trl",
"orpo",
"generated_from_trainer",
"conversational",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"arxiv:2403.07691",
"arxiv:2311.07911",
"base_model:mistral-community/Mixtral-8x22B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T20:25:20+00:00
|
[
"2403.07691",
"2311.07911"
] |
[] |
TAGS
#transformers #safetensors #mixtral #text-generation #trl #orpo #generated_from_trainer #conversational #dataset-argilla/distilabel-capybara-dpo-7k-binarized #arxiv-2403.07691 #arxiv-2311.07911 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<img src="URL alt="Zephyr 141B Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Model Card for Zephyr 141B-A35B
===============================
Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 141B-A35B is the latest model in the series, and is a fine-tuned version of mistral-community/Mixtral-8x22B-v0.1 that was trained using a novel alignment algorithm called Odds Ratio Preference Optimization (ORPO) with 7k instances for 1.3 hours on 4 nodes of 8 x H100s. ORPO does not require an SFT step to achieve high performance and is thus much more computationally efficient than methods like DPO and PPO. To train Zephyr-141B-A35B, we used the 'argilla/distilabel-capybara-dpo-7k-binarized' preference dataset, which consists of synthetic, high-quality, multi-turn preferences that have been scored via LLMs.
>
> [!NOTE]
> This model was trained collaboratively between Argilla, KAIST, and Hugging Face
>
>
>
Model Details
-------------
### Model Description
* Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets.
* Language(s) (NLP): Primarily English.
* License: Apache 2.0
* Finetuned from model: mistral-community/Mixtral-8x22B-v0.1
### Model Sources
* Repository: URL
* Dataset: URL
Performance
-----------
Zephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.
Intended uses & limitations
---------------------------
The model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers:
Bias, Risks, and Limitations
----------------------------
Zephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
The size and composition of the corpus used to train the base model ('mistral-community/Mixtral-8x22B-v0.1') are also unknown; however, it likely included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this.
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-06
* train\_batch\_size: 1
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 32
* total\_train\_batch\_size: 32
* total\_eval\_batch\_size: 256
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: inverse\_sqrt
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.1
If you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper:
You may also wish to cite the creators of this model:
|
[
"### Model Description\n\n\n* Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English.\n* License: Apache 2.0\n* Finetuned from model: mistral-community/Mixtral-8x22B-v0.1",
"### Model Sources\n\n\n* Repository: URL\n* Dataset: URL\n\n\nPerformance\n-----------\n\n\nZephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nIt is also unknown what the size and composition of the corpus was used to train the base model ('mistral-community/Mixtral-8x22B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this.\n\n\nTraining procedure\n------------------",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 32\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: inverse\\_sqrt\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1\n\n\nIf you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper:\n\n\nYou may also wish to cite the creators of this model:"
] |
[
"TAGS\n#transformers #safetensors #mixtral #text-generation #trl #orpo #generated_from_trainer #conversational #dataset-argilla/distilabel-capybara-dpo-7k-binarized #arxiv-2403.07691 #arxiv-2311.07911 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Model Description\n\n\n* Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English.\n* License: Apache 2.0\n* Finetuned from model: mistral-community/Mixtral-8x22B-v0.1",
"### Model Sources\n\n\n* Repository: URL\n* Dataset: URL\n\n\nPerformance\n-----------\n\n\nZephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nIt is also unknown what the size and composition of the corpus was used to train the base model ('mistral-community/Mixtral-8x22B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this.\n\n\nTraining procedure\n------------------",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 32\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: inverse\\_sqrt\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1\n\n\nIf you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper:\n\n\nYou may also wish to cite the creators of this model:"
] |
text-to-image
|
diffusers
|
# Oussama
<Gallery />
## Model description
A superhero
## Download model
[Download](/Ily1h0/Oussama1/tree/main) them in the Files & versions tab.
|
{"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "-", "output": {"url": "images/Screenshot_20240412_202507_com.instagram.android.jpg"}}], "base_model": "stabilityai/stable-diffusion-xl-base-1.0"}
|
Ily1h0/Oussama1
| null |
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | null |
2024-04-12T20:29:15+00:00
|
[] |
[] |
TAGS
#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us
|
# Oussama
<Gallery />
## Model description
A superhero
## Download model
Download them in the Files & versions tab.
|
[
"# Oussama\n\n<Gallery />",
"## Model description \n\nA superhero",
"## Download model\n\n\nDownload them in the Files & versions tab."
] |
[
"TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us \n",
"# Oussama\n\n<Gallery />",
"## Model description \n\nA superhero",
"## Download model\n\n\nDownload them in the Files & versions tab."
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers"}
|
hflog/ibivibiv-collosus_120b
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T20:37:50+00:00
|
[
"1910.09700"
] |
[
"en"
] |
TAGS
#transformers #safetensors #llama #text-generation #en #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #en #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
This model is a test of training on Intel Habana Gaudi-2 chips. It was trained across 8x Gaudi-2 chips with DDP and DeepSpeed on the OpenAssistant dataset.
# Prompt Format
This model uses ChatML as the prompt format.
```
<|im_start|>system
You are a helpful assistant for Python which outputs in Markdown format.<|im_end|>
<|im_start|>user
Write a function to calculate the Fibonacci sequence<|im_end|>
<|im_start|>assistant
```
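Below is a minimal, hedged sketch of using this prompt format with 🤗 Transformers. The repository id is inferred from this card's name and may differ, and it is assumed the checkpoint loads with plain Transformers on non-Gaudi hardware (it was trained with Optimum Habana); the prompt string simply follows the ChatML layout shown above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ndavidson/Phi-2-openassistant"  # assumed repo id, may differ
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a ChatML prompt exactly as shown above, ending with the assistant header
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant for Python which outputs in Markdown format.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a function to calculate the Fibonacci sequence<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```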
|
{"license": "apache-2.0", "datasets": ["timdettmers/openassistant-guanaco"]}
|
hflog/ndavidson-Phi-2-openassistant
| null |
[
"safetensors",
"optimum_habana",
"dataset:timdettmers/openassistant-guanaco",
"license:apache-2.0",
"region:us"
] | null |
2024-04-12T20:37:55+00:00
|
[] |
[] |
TAGS
#safetensors #optimum_habana #dataset-timdettmers/openassistant-guanaco #license-apache-2.0 #region-us
|
This model is a test of training on Intel Habana Gaudi-2 chips. It was trained across 8x Gaudi-2 chips with DDP and DeepSpeed on the OpenAssistant dataset.
# Prompt Format
This model uses ChatML as the prompt format.
|
[
"# Prompt Format\n\nThis model uses ChatML as the prompt format."
] |
[
"TAGS\n#safetensors #optimum_habana #dataset-timdettmers/openassistant-guanaco #license-apache-2.0 #region-us \n",
"# Prompt Format\n\nThis model uses ChatML as the prompt format."
] |
text-generation
|
transformers
|
# Misted v2 7B
This is another version of [misted-7b](https://huggingface.co/walmart-the-bag/misted-7b). This creation was designed to tackle coding, provide instructions, solve riddles, and fulfill a variety of purposes. It was developed using the slerp approach, which involved combining several mistral models with misted-7b.
# Prompt Format
###### Alpaca
```
### Instruction:
### Response:
```
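For illustration, here is a hedged usage sketch with 🤗 Transformers that fills in the Alpaca template above; the repository id is inferred from this card and may differ, and the instruction text is only an example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Walmart-the-bag/Misted-v2-7B"  # assumed repo id, may differ
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Fill in the Alpaca prompt format shown above
prompt = (
    "### Instruction:\n"
    "Write a Python function that checks whether a string is a palindrome.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```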
|
{"language": ["en", "es"], "license": "apache-2.0", "library_name": "transformers", "tags": ["code", "mistral", "merge", "slerp"]}
|
hflog/Walmart-the-bag-Misted-v2-7B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"merge",
"slerp",
"conversational",
"en",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T20:38:01+00:00
|
[] |
[
"en",
"es"
] |
TAGS
#transformers #safetensors #mistral #text-generation #code #merge #slerp #conversational #en #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Misted v2 7B
This is another version of misted-7b. This creation was designed to tackle coding, provide instructions, solve riddles, and fulfill a variety of purposes. It was developed using the slerp approach, which involved combining several mistral models with misted-7b.
# Prompt Format
###### Alpaca
|
[
"# Misted v2 7B\nThis is another version of misted-7b. This creation was designed to tackle coding, provide instructions, solve riddles, and fulfill a variety of purposes. It was developed using the slerp approach, which involved combining several mistral models with misted-7b.",
"# Prompt Format",
"###### Alpaca"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #code #merge #slerp #conversational #en #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Misted v2 7B\nThis is another version of misted-7b. This creation was designed to tackle coding, provide instructions, solve riddles, and fulfill a variety of purposes. It was developed using the slerp approach, which involved combining several mistral models with misted-7b.",
"# Prompt Format",
"###### Alpaca"
] |
text-generation
|
transformers
|

# Vision/multimodal capabilities:
If you want to use vision functionality:
* You must use the latest version of [Koboldcpp](https://github.com/LostRuins/koboldcpp).
To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file, which can be found inside this model repo.
* You can load the **mmproj** by using the corresponding section in the interface:

|
{"license": "other", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Nitral-AI/Nyanade-Maid-7B", "Nitral-AI/Nyan-Stunna-7B"]}
|
hflog/Nitral-AI-Nyanade_Stunna-Maid-7B
| null |
[
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Nitral-AI/Nyanade-Maid-7B",
"base_model:Nitral-AI/Nyan-Stunna-7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T20:38:10+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #gguf #mistral #text-generation #mergekit #merge #base_model-Nitral-AI/Nyanade-Maid-7B #base_model-Nitral-AI/Nyan-Stunna-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
!image/jpeg
# Vision/multimodal capabilities:
If you want to use vision functionality:
* You must use the latest version of Koboldcpp.
To use the multimodal capabilities of this model and use vision, you need to load the specified mmproj file, which can be found inside this model repo.
* You can load the mmproj by using the corresponding section in the interface:
!image/png
|
[
"# Vision/multimodal capabilities:\n\n If you want to use vision functionality:\n\n * You must use the latest versions of Koboldcpp.\n \nTo use the multimodal capabilities of this model and use vision you need to load the specified mmproj file, this can be found inside this model repo.\n \n * You can load the mmproj by using the corresponding section in the interface:\n\n !image/png"
] |
[
"TAGS\n#transformers #safetensors #gguf #mistral #text-generation #mergekit #merge #base_model-Nitral-AI/Nyanade-Maid-7B #base_model-Nitral-AI/Nyan-Stunna-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Vision/multimodal capabilities:\n\n If you want to use vision functionality:\n\n * You must use the latest versions of Koboldcpp.\n \nTo use the multimodal capabilities of this model and use vision you need to load the specified mmproj file, this can be found inside this model repo.\n \n * You can load the mmproj by using the corresponding section in the interface:\n\n !image/png"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
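For orientation, here is a minimal sketch of how these settings map onto 🤗 Transformers `Seq2SeqTrainingArguments`; the `output_dir` value is illustrative, and anything not listed above is not taken from the original training script.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="results",              # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,     # effective train batch size: 4 * 2 = 8
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```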
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7391 | 0.5 | 1000 | 0.1019 |
| 0.1177 | 1.0 | 2000 | 0.0966 |
| 0.1105 | 1.5 | 3000 | 0.0951 |
| 0.1079 | 2.0 | 4000 | 0.0939 |
| 0.1054 | 2.5 | 5000 | 0.0934 |
| 0.1054 | 3.0 | 6000 | 0.0928 |
| 0.1026 | 3.5 | 7000 | 0.0925 |
| 0.1039 | 4.0 | 8000 | 0.0922 |
| 0.102 | 4.5 | 9000 | 0.0920 |
| 0.1017 | 5.0 | 10000 | 0.0918 |
| 0.1003 | 5.5 | 11000 | 0.0918 |
| 0.1014 | 6.0 | 12000 | 0.0916 |
| 0.0993 | 6.5 | 13000 | 0.0916 |
| 0.101 | 7.0 | 14000 | 0.0914 |
| 0.0999 | 7.5 | 15000 | 0.0914 |
| 0.0994 | 8.0 | 16000 | 0.0913 |
| 0.1002 | 8.5 | 17000 | 0.0913 |
| 0.0986 | 9.0 | 18000 | 0.0913 |
| 0.0995 | 9.5 | 19000 | 0.0913 |
| 0.0987 | 10.0 | 20000 | 0.0913 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "t5-small", "model-index": [{"name": "results", "results": []}]}
|
ashwinradhe/results
| null |
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T20:38:15+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
results
=======
This model is a fine-tuned version of t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0913
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Mixtral-8x7B--v0.1: Model 8
## Model Description
This model is the 8th extracted standalone model from the [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), using the [Mixtral Model Expert Extractor tool](https://github.com/MeNicefellow/Mixtral-Model-Expert-Extractor) I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.
## Model Architecture
The architecture of this model includes:
- Multi-head attention layers derived from the base Mixtral model.
- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.
- Additional layers and components as required to ensure the model's functionality outside the MoE framework.
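For intuition, a rough sketch of the "first expert per layer" extraction idea follows. This is **not** the author's exact tool: module and field names assume the Hugging Face Mixtral/Mistral implementations, only the expert (MLP) weights are shown, and the remaining weights (embeddings, attention, norms, LM head), which are shared with the base model, are assumed to be copied over unchanged.

```python
# Rough sketch only; assumes enough memory to hold the full Mixtral checkpoint.
import torch
from transformers import AutoModelForCausalLM, MistralConfig, MistralForCausalLM

mixtral = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1", torch_dtype=torch.bfloat16
)
cfg = mixtral.config

# Build an empty dense Mistral-style model with matching dimensions
mistral = MistralForCausalLM(MistralConfig(
    vocab_size=cfg.vocab_size,
    hidden_size=cfg.hidden_size,
    intermediate_size=cfg.intermediate_size,
    num_hidden_layers=cfg.num_hidden_layers,
    num_attention_heads=cfg.num_attention_heads,
    num_key_value_heads=cfg.num_key_value_heads,
    max_position_embeddings=cfg.max_position_embeddings,
    rms_norm_eps=cfg.rms_norm_eps,
    rope_theta=cfg.rope_theta,
))

expert_idx = 0  # the "first expert" in every MoE layer
for moe_layer, dense_layer in zip(mixtral.model.layers, mistral.model.layers):
    expert = moe_layer.block_sparse_moe.experts[expert_idx]
    # Mixtral expert (w1, w3, w2) maps onto Mistral MLP (gate_proj, up_proj, down_proj)
    dense_layer.mlp.gate_proj.weight.data.copy_(expert.w1.weight.data)
    dense_layer.mlp.up_proj.weight.data.copy_(expert.w3.weight.data)
    dense_layer.mlp.down_proj.weight.data.copy_(expert.w2.weight.data)
# Embeddings, attention, norms and the LM head would be copied from `mixtral` as-is (omitted)
```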
### Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the extracted expert model and its tokenizer from the Hub
model_name = "DrNicefellow/Mistral-8-from-Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Greedy-decode a short continuation of the prompt
text = "Today is a pleasant"
input_ids = tokenizer.encode(text, return_tensors='pt')
output = model.generate(input_ids)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## License
This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.
|
{"license": "apache-2.0"}
|
hflog/DrNicefellow-Mistral-8-from-Mixtral-8x7B-v0.1
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T20:38:22+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mixtral-8x7B--v0.1: Model 8
## Model Description
This model is the 8th extracted standalone model from the mistralai/Mixtral-8x7B-v0.1, using the Mixtral Model Expert Extractor tool I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.
## Model Architecture
The architecture of this model includes:
- Multi-head attention layers derived from the base Mixtral model.
- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.
- Additional layers and components as required to ensure the model's functionality outside the MoE framework.
### Example
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server here.
## License
This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.
|
[
"# Mixtral-8x7B--v0.1: Model 8",
"## Model Description\n\nThis model is the 8th extracted standalone model from the mistralai/Mixtral-8x7B-v0.1, using the Mixtral Model Expert Extractor tool I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.",
"## Model Architecture\n\nThe architecture of this model includes:\n- Multi-head attention layers derived from the base Mixtral model.\n- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.\n- Additional layers and components as required to ensure the model's functionality outside the MoE framework.",
"### Example",
"## License\n\nThis model is available under the Apache 2.0 License.",
"## Discord Server\n\nJoin our Discord server here.",
"## License\n\nThis model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details."
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mixtral-8x7B--v0.1: Model 8",
"## Model Description\n\nThis model is the 8th extracted standalone model from the mistralai/Mixtral-8x7B-v0.1, using the Mixtral Model Expert Extractor tool I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.",
"## Model Architecture\n\nThe architecture of this model includes:\n- Multi-head attention layers derived from the base Mixtral model.\n- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.\n- Additional layers and components as required to ensure the model's functionality outside the MoE framework.",
"### Example",
"## License\n\nThis model is available under the Apache 2.0 License.",
"## Discord Server\n\nJoin our Discord server here.",
"## License\n\nThis model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details."
] |
text-generation
|
transformers
|
# Mixtral-8x7B--v0.1: Model 7
## Model Description
This model is the 7th extracted standalone model from the [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), using the [Mixtral Model Expert Extractor tool](https://github.com/MeNicefellow/Mixtral-Model-Expert-Extractor) I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.
## Model Architecture
The architecture of this model includes:
- Multi-head attention layers derived from the base Mixtral model.
- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.
- Additional layers and components as required to ensure the model's functionality outside the MoE framework.
### Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the extracted expert model and its tokenizer from the Hub
model_name = "DrNicefellow/Mistral-7-from-Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Greedy-decode a short continuation of the prompt
text = "Today is a pleasant"
input_ids = tokenizer.encode(text, return_tensors='pt')
output = model.generate(input_ids)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## License
This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.
|
{"license": "apache-2.0"}
|
hflog/DrNicefellow-Mistral-7-from-Mixtral-8x7B-v0.1
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T20:38:34+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mixtral-8x7B--v0.1: Model 7
## Model Description
This model is the 7th extracted standalone model from the mistralai/Mixtral-8x7B-v0.1, using the Mixtral Model Expert Extractor tool I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.
## Model Architecture
The architecture of this model includes:
- Multi-head attention layers derived from the base Mixtral model.
- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.
- Additional layers and components as required to ensure the model's functionality outside the MoE framework.
### Example
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server here.
## License
This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.
|
[
"# Mixtral-8x7B--v0.1: Model 7",
"## Model Description\n\nThis model is the 7th extracted standalone model from the mistralai/Mixtral-8x7B-v0.1, using the Mixtral Model Expert Extractor tool I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.",
"## Model Architecture\n\nThe architecture of this model includes:\n- Multi-head attention layers derived from the base Mixtral model.\n- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.\n- Additional layers and components as required to ensure the model's functionality outside the MoE framework.",
"### Example",
"## License\n\nThis model is available under the Apache 2.0 License.",
"## Discord Server\n\nJoin our Discord server here.",
"## License\n\nThis model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details."
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mixtral-8x7B--v0.1: Model 7",
"## Model Description\n\nThis model is the 7th extracted standalone model from the mistralai/Mixtral-8x7B-v0.1, using the Mixtral Model Expert Extractor tool I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.",
"## Model Architecture\n\nThe architecture of this model includes:\n- Multi-head attention layers derived from the base Mixtral model.\n- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.\n- Additional layers and components as required to ensure the model's functionality outside the MoE framework.",
"### Example",
"## License\n\nThis model is available under the Apache 2.0 License.",
"## Discord Server\n\nJoin our Discord server here.",
"## License\n\nThis model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details."
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
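For reference, the settings above map onto `transformers` `TrainingArguments` roughly as in the hedged sketch below; the actual training script is not part of this card, so the output directory is an assumption (the Adam betas/epsilon and linear scheduler are already the `Trainer` defaults).

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="roberta-finetuned-squad",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # corresponds to "Native AMP" mixed-precision training
)
```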
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.19.0
- Tokenizers 0.15.0
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "roberta-base", "model-index": [{"name": "roberta-finetuned-squad", "results": []}]}
|
noushsuon/roberta-finetuned-squad
| null |
[
"transformers",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T20:38:41+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #roberta #question-answering #generated_from_trainer #base_model-roberta-base #license-mit #endpoints_compatible #region-us
|
# roberta-finetuned-squad
This model is a fine-tuned version of roberta-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.19.0
- Tokenizers 0.15.0
|
[
"# roberta-finetuned-squad\n\nThis model is a fine-tuned version of roberta-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 2.1.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.0"
] |
[
"TAGS\n#transformers #safetensors #roberta #question-answering #generated_from_trainer #base_model-roberta-base #license-mit #endpoints_compatible #region-us \n",
"# roberta-finetuned-squad\n\nThis model is a fine-tuned version of roberta-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 2.1.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.0"
] |
text-generation
|
transformers
|
# Mixtral-8x7B--v0.1: Model 6
## Model Description
This model is the 6th extracted standalone model from the [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), using the [Mixtral Model Expert Extractor tool](https://github.com/MeNicefellow/Mixtral-Model-Expert-Extractor) I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.
## Model Architecture
The architecture of this model includes:
- Multi-head attention layers derived from the base Mixtral model.
- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.
- Additional layers and components as required to ensure the model's functionality outside the MoE framework.
### Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "DrNicefellow/Mistral-6-from-Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
text = "Today is a pleasant"
input_ids = tokenizer.encode(text, return_tensors='pt')
output = model.generate(input_ids)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## License
This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.
|
{"license": "apache-2.0"}
|
hflog/DrNicefellow-Mistral-6-from-Mixtral-8x7B-v0.1
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T20:38:42+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mixtral-8x7B--v0.1: Model 6
## Model Description
This model is the 6th extracted standalone model from the mistralai/Mixtral-8x7B-v0.1, using the Mixtral Model Expert Extractor tool I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.
## Model Architecture
The architecture of this model includes:
- Multi-head attention layers derived from the base Mixtral model.
- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.
- Additional layers and components as required to ensure the model's functionality outside the MoE framework.
### Example
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server here.
## License
This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.
|
[
"# Mixtral-8x7B--v0.1: Model 6",
"## Model Description\n\nThis model is the 6th extracted standalone model from the mistralai/Mixtral-8x7B-v0.1, using the Mixtral Model Expert Extractor tool I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.",
"## Model Architecture\n\nThe architecture of this model includes:\n- Multi-head attention layers derived from the base Mixtral model.\n- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.\n- Additional layers and components as required to ensure the model's functionality outside the MoE framework.",
"### Example",
"## License\n\nThis model is available under the Apache 2.0 License.",
"## Discord Server\n\nJoin our Discord server here.",
"## License\n\nThis model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details."
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mixtral-8x7B--v0.1: Model 6",
"## Model Description\n\nThis model is the 6th extracted standalone model from the mistralai/Mixtral-8x7B-v0.1, using the Mixtral Model Expert Extractor tool I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.",
"## Model Architecture\n\nThe architecture of this model includes:\n- Multi-head attention layers derived from the base Mixtral model.\n- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.\n- Additional layers and components as required to ensure the model's functionality outside the MoE framework.",
"### Example",
"## License\n\nThis model is available under the Apache 2.0 License.",
"## Discord Server\n\nJoin our Discord server here.",
"## License\n\nThis model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details."
] |
text-generation
|
transformers
|
# Mixtral-8x7B--v0.1: Model 5
## Model Description
This model is the 5th extracted standalone model from the [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), using the [Mixtral Model Expert Extractor tool](https://github.com/MeNicefellow/Mixtral-Model-Expert-Extractor) I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.
## Model Architecture
The architecture of this model includes:
- Multi-head attention layers derived from the base Mixtral model.
- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.
- Additional layers and components as required to ensure the model's functionality outside the MoE framework.
### Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "DrNicefellow/Mistral-5-from-Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
text = "Today is a pleasant"
input_ids = tokenizer.encode(text, return_tensors='pt')
output = model.generate(input_ids)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## License
This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.
|
{"license": "apache-2.0"}
|
hflog/DrNicefellow-Mistral-5-from-Mixtral-8x7B-v0.1
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T20:38:49+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mixtral-8x7B--v0.1: Model 5
## Model Description
This model is the 5th extracted standalone model from the mistralai/Mixtral-8x7B-v0.1, using the Mixtral Model Expert Extractor tool I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.
## Model Architecture
The architecture of this model includes:
- Multi-head attention layers derived from the base Mixtral model.
- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.
- Additional layers and components as required to ensure the model's functionality outside the MoE framework.
### Example
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server here.
## License
This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.
|
[
"# Mixtral-8x7B--v0.1: Model 5",
"## Model Description\n\nThis model is the 5th extracted standalone model from the mistralai/Mixtral-8x7B-v0.1, using the Mixtral Model Expert Extractor tool I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.",
"## Model Architecture\n\nThe architecture of this model includes:\n- Multi-head attention layers derived from the base Mixtral model.\n- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.\n- Additional layers and components as required to ensure the model's functionality outside the MoE framework.",
"### Example",
"## License\n\nThis model is available under the Apache 2.0 License.",
"## Discord Server\n\nJoin our Discord server here.",
"## License\n\nThis model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details."
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mixtral-8x7B--v0.1: Model 5",
"## Model Description\n\nThis model is the 5th extracted standalone model from the mistralai/Mixtral-8x7B-v0.1, using the Mixtral Model Expert Extractor tool I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.",
"## Model Architecture\n\nThe architecture of this model includes:\n- Multi-head attention layers derived from the base Mixtral model.\n- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.\n- Additional layers and components as required to ensure the model's functionality outside the MoE framework.",
"### Example",
"## License\n\nThis model is available under the Apache 2.0 License.",
"## Discord Server\n\nJoin our Discord server here.",
"## License\n\nThis model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details."
] |
text-generation
|
transformers
|
# Mixtral-8x7B--v0.1: Model 4
## Model Description
This model is the 4th extracted standalone model from the [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), using the [Mixtral Model Expert Extractor tool](https://github.com/MeNicefellow/Mixtral-Model-Expert-Extractor) I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.
## Model Architecture
The architecture of this model includes:
- Multi-head attention layers derived from the base Mixtral model.
- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.
- Additional layers and components as required to ensure the model's functionality outside the MoE framework.
### Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "DrNicefellow/Mistral-4-from-Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
text = "Today is a pleasant"
input_ids = tokenizer.encode(text, return_tensors='pt')
output = model.generate(input_ids)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## License
This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.
|
{"license": "apache-2.0"}
|
hflog/DrNicefellow-Mistral-4-from-Mixtral-8x7B-v0.1
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T20:38:58+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mixtral-8x7B--v0.1: Model 4
## Model Description
This model is the 4th extracted standalone model from the mistralai/Mixtral-8x7B-v0.1, using the Mixtral Model Expert Extractor tool I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.
## Model Architecture
The architecture of this model includes:
- Multi-head attention layers derived from the base Mixtral model.
- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.
- Additional layers and components as required to ensure the model's functionality outside the MoE framework.
### Example
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server here.
## License
This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.
|
[
"# Mixtral-8x7B--v0.1: Model 4",
"## Model Description\n\nThis model is the 4th extracted standalone model from the mistralai/Mixtral-8x7B-v0.1, using the Mixtral Model Expert Extractor tool I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.",
"## Model Architecture\n\nThe architecture of this model includes:\n- Multi-head attention layers derived from the base Mixtral model.\n- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.\n- Additional layers and components as required to ensure the model's functionality outside the MoE framework.",
"### Example",
"## License\n\nThis model is available under the Apache 2.0 License.",
"## Discord Server\n\nJoin our Discord server here.",
"## License\n\nThis model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details."
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mixtral-8x7B--v0.1: Model 4",
"## Model Description\n\nThis model is the 4th extracted standalone model from the mistralai/Mixtral-8x7B-v0.1, using the Mixtral Model Expert Extractor tool I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.",
"## Model Architecture\n\nThe architecture of this model includes:\n- Multi-head attention layers derived from the base Mixtral model.\n- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.\n- Additional layers and components as required to ensure the model's functionality outside the MoE framework.",
"### Example",
"## License\n\nThis model is available under the Apache 2.0 License.",
"## Discord Server\n\nJoin our Discord server here.",
"## License\n\nThis model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details."
] |
text-generation
|
transformers
|
## Rho-1: Not All Tokens Are What You Need
The Rho-1 series are pretrained language models that utilize Selective Language Modeling (SLM) objectives.
In math reasoning pretraining, SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.
For more details, please check our [GitHub](https://github.com/microsoft/rho) and [paper](https://arxiv.org/abs/2404.07965).
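As a rough illustration only (not taken from the official repository), the model can presumably be loaded and prompted like any causal language model in `transformers`; the repo id and prompt format below are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/rho-math-1b-v0.1"  # assumed repo id for this card
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Question: What is 15% of 240?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```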
|
{"language": ["en"], "license": "mit", "tags": ["nlp", "math"], "pipeline_tag": "text-generation"}
|
hflog/microsoft-rho-math-1b-v0.1
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"nlp",
"math",
"en",
"arxiv:2404.07965",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T20:39:03+00:00
|
[
"2404.07965"
] |
[
"en"
] |
TAGS
#transformers #safetensors #llama #text-generation #nlp #math #en #arxiv-2404.07965 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## Rho-1: Not All Tokens Are What You Need
The Rho-1 series are pretrained language models that utilize Selective Language Modeling (SLM) objectives.
In math reasoning pretraining, SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.
For more details please check our github and paper.
|
[
"## Rho-1: Not All Tokens Are What You Need\n\n\nThe Rho-1 series are pretrained language models that utilize Selective Language Modeling (SLM) objectives.\nIn math reasoning pretraining, SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.\n\n\nFor more details please check our github and paper."
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #nlp #math #en #arxiv-2404.07965 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Rho-1: Not All Tokens Are What You Need\n\n\nThe Rho-1 series are pretrained language models that utilize Selective Language Modeling (SLM) objectives.\nIn math reasoning pretraining, SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.\n\n\nFor more details please check our github and paper."
] |
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** rmdhirr
- **License:** apache-2.0
- **Finetuned from model :** MTSAIR/multi_verse_model
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "dpo", "uncensored"], "base_model": "MTSAIR/multi_verse_model"}
|
hflog/rmdhirr-Pulsar_7B
| null |
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"uncensored",
"conversational",
"en",
"base_model:MTSAIR/multi_verse_model",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T20:39:13+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #dpo #uncensored #conversational #en #base_model-MTSAIR/multi_verse_model #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: rmdhirr
- License: apache-2.0
- Finetuned from model : MTSAIR/multi_verse_model
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: rmdhirr\n- License: apache-2.0\n- Finetuned from model : MTSAIR/multi_verse_model\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #dpo #uncensored #conversational #en #base_model-MTSAIR/multi_verse_model #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: rmdhirr\n- License: apache-2.0\n- Finetuned from model : MTSAIR/multi_verse_model\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** chatty123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "dpo"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
|
hflog/chatty123-mistral_rank32_dpo
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T20:39:24+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #dpo #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: chatty123
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: chatty123\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #dpo #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: chatty123\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** chatty123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
|
hflog/chatty123-mistral_rank32_invert
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T20:39:29+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: chatty123
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: chatty123\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: chatty123\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** chatty123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
|
hflog/chatty123-mistral_rank32_sft
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T20:39:35+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: chatty123
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: chatty123\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: chatty123\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** chatty123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "dpo"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
|
hflog/chatty123-mistral_rank16_dpo
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T20:39:46+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #dpo #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: chatty123
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: chatty123\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #dpo #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: chatty123\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
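Since the repository name suggests a PEFT LoRA adapter on top of `openai/whisper-base` (likely for Javanese, "jv"), a hypothetical loading sketch is given below; the base model, adapter id, and language are inferred from the repo name and are not confirmed by the author.

```python
# Hypothetical sketch only: base model and adapter id are assumptions.
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
model = PeftModel.from_pretrained(
    base, "simonamdev/openai-whisper-base-jv-PeftType.LORA"
)
processor = WhisperProcessor.from_pretrained("openai/whisper-base")

# Typical Whisper usage: pass 16 kHz audio through the processor, then generate.
# audio = ...  # 1-D float array sampled at 16_000 Hz
# inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
# ids = model.generate(input_features=inputs.input_features)
# print(processor.batch_decode(ids, skip_special_tokens=True))
```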
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
simonamdev/openai-whisper-base-jv-PeftType.LORA
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T20:42:59+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
suneeln-duke/dukebot-qa-v1
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T20:44:41+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
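As a stopgap while the card is completed, the sketch below shows one way to load the checkpoint; it is an assumption based on the repository's `mistral` / `text-generation` / `4-bit` tags, not the authors' documented usage, and the example prompt is purely illustrative.
```python
# Hedged sketch, not from the original card: assumes the repo holds a standard
# Mistral-style causal-LM checkpoint, as its tags suggest.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "suneeln-duke/dukebot-qa-v1-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Illustrative question only; the intended prompt format is not documented.
inputs = tokenizer("What courses should I take for a machine learning focus?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```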
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
suneeln-duke/dukebot-qa-v1-merged
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-12T20:46:10+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Uploaded model
- **Developed by:** ahmetyaylalioglu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-13b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
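The card ships no inference snippet; the sketch below is a hedged example that assumes the uploaded weights (or LoRA adapters) can be reloaded with Unsloth's `FastLanguageModel`, mirroring how the model was trained. The prompt is only a placeholder.
```python
# Hedged sketch, not from the original card: assumes the uploaded weights (or
# LoRA adapters) can be reloaded with Unsloth, mirroring how they were trained.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ahmetyaylalioglu/llama13v2_prompt_recover_model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster generation path

# Placeholder prompt; the training prompt format is not documented in the card.
inputs = tokenizer("Recover the original prompt for this rewritten text: ...", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```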
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-2-13b-bnb-4bit"}
|
ahmetyaylalioglu/llama13v2_prompt_recover_model
| null |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-2-13b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T20:49:20+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-2-13b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: ahmetyaylalioglu
- License: apache-2.0
- Finetuned from model : unsloth/llama-2-13b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: ahmetyaylalioglu\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-13b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-2-13b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: ahmetyaylalioglu\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-13b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-to-image
|
diffusers
|
# Lumina v.1.5
<Gallery />
## Model description
LuMiNa is a text model generated by artificial intelligence, developed by SYNTHETICA. This model is capable of generating contextually relevant and coherent text, making it an extremely useful tool for a wide range of applications, such as creating content for websites, blogs, social media, and much more. With its capacity for continuous learning and adaptation, the LuMiNa model delivers increasingly precise and relevant results, providing an exceptional user experience.
## Trigger words
You should use `lumina` to trigger the image generation.
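The card does not include generation code; a minimal, hedged sketch is shown below. It assumes the LoRA weights in this repository load directly onto the SDXL base model declared in the card metadata (pass `weight_name=...` to `load_lora_weights` if the repo holds several weight files).
```python
# Hedged sketch, not from the original card: loads the LoRA in this repo on top
# of the SDXL base model declared in the card metadata.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("synthetica/luminav1.5")  # may need weight_name=... if several files exist

# The trigger word "lumina" starts the prompt, as the card instructs.
image = pipe("lumina, a collage of a woman in a cartoon sun").images[0]
image.save("lumina.png")
```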
## Download model
Weights for this model are available in Safetensors format.
[Download](/synthetica/luminav1.5/tree/main) them in the Files & versions tab.
|
{"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "an collage of a woman in a cartoon sun", "output": {"url": "images/image (3).png"}}], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "lumina"}
|
synthetica/luminav1.5
| null |
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | null |
2024-04-12T20:53:43+00:00
|
[] |
[] |
TAGS
#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us
|
# Lumina v.1.5
<Gallery />
## Model description
LuMiNa is a text model generated by artificial intelligence, developed by SYNTHETICA. This model is capable of generating contextually relevant and coherent text, making it an extremely useful tool for a wide range of applications, such as creating content for websites, blogs, social media, and much more. With its capacity for continuous learning and adaptation, the LuMiNa model delivers increasingly precise and relevant results, providing an exceptional user experience.
## Trigger words
You should use 'lumina' to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
|
[
"# Lumina v.1.5\n\n<Gallery />",
"## Model description \n\nLuMiNa é um modelo de texto gerado por inteligência artificial desenvolvido pela SYNTHETICA. Este modelo é capaz de gerar textos contextualmente relevantes e coerentes, tornando-o uma ferramenta extremamente útil para diversas aplicações, como a criação de conteúdo para websites, blogs, redes sociais e muito mais. Com sua capacidade de aprendizado e adaptação contínua, o modelo LuMiNa é capaz de oferecer resultados cada vez mais precisos e relevantes, proporcionando uma experiência de usuário excepcional.",
"## Trigger words\n\nYou should use 'lumina' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
[
"TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us \n",
"# Lumina v.1.5\n\n<Gallery />",
"## Model description \n\nLuMiNa é um modelo de texto gerado por inteligência artificial desenvolvido pela SYNTHETICA. Este modelo é capaz de gerar textos contextualmente relevantes e coerentes, tornando-o uma ferramenta extremamente útil para diversas aplicações, como a criação de conteúdo para websites, blogs, redes sociais e muito mais. Com sua capacidade de aprendizado e adaptação contínua, o modelo LuMiNa é capaz de oferecer resultados cada vez mais precisos e relevantes, proporcionando uma experiência de usuário excepcional.",
"## Trigger words\n\nYou should use 'lumina' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IndianLegalBERT
This model is a fine-tuned version of [law-ai/InLegalBERT](https://huggingface.co/law-ai/InLegalBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2872
- Accuracy: 0.8218
- Precision: 0.8227
- Recall: 0.8218
- Precision Macro: 0.7823
- Recall Macro: 0.7855
- Macro Fpr: 0.0158
- Weighted Fpr: 0.0152
- Weighted Specificity: 0.9773
- Macro Specificity: 0.9866
- Weighted Sensitivity: 0.8218
- Macro Sensitivity: 0.7855
- F1 Micro: 0.8218
- F1 Macro: 0.7809
- F1 Weighted: 0.8211
## Model description
More information needed
## Intended uses & limitations
More information needed
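The card does not document usage; a minimal, hedged inference sketch follows. It assumes the checkpoint exposes a standard sequence-classification head, and the meaning of the predicted label ids is not described in this card.
```python
# Hedged sketch, not from the original card: assumes a standard
# sequence-classification head fine-tuned from InLegalBERT.
from transformers import pipeline

classifier = pipeline("text-classification", model="xshubhamx/IndianLegalBERT")
# Illustrative input; the label ids' legal meaning is not documented here.
print(classifier("The appellant was convicted under Section 302 of the Indian Penal Code."))
```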
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Precision Macro | Recall Macro | Macro Fpr | Weighted Fpr | Weighted Specificity | Macro Specificity | Weighted Sensitivity | Macro Sensitivity | F1 Micro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:---------------:|:------------:|:---------:|:------------:|:--------------------:|:-----------------:|:--------------------:|:-----------------:|:--------:|:--------:|:-----------:|
| 1.1031 | 1.0 | 643 | 0.6873 | 0.7854 | 0.7628 | 0.7854 | 0.5923 | 0.6107 | 0.0201 | 0.0191 | 0.9691 | 0.9836 | 0.7854 | 0.6107 | 0.7854 | 0.5863 | 0.7674 |
| 0.5953 | 2.0 | 1286 | 0.6741 | 0.8195 | 0.8135 | 0.8195 | 0.7481 | 0.7363 | 0.0162 | 0.0155 | 0.9753 | 0.9863 | 0.8195 | 0.7363 | 0.8195 | 0.7377 | 0.8153 |
| 0.4673 | 3.0 | 1929 | 0.7955 | 0.8242 | 0.8206 | 0.8242 | 0.7588 | 0.7421 | 0.0157 | 0.0150 | 0.9749 | 0.9866 | 0.8242 | 0.7421 | 0.8242 | 0.7433 | 0.8204 |
| 0.2292 | 4.0 | 2572 | 0.8666 | 0.8280 | 0.8297 | 0.8280 | 0.7945 | 0.7864 | 0.0151 | 0.0146 | 0.9786 | 0.9871 | 0.8280 | 0.7864 | 0.8280 | 0.7840 | 0.8270 |
| 0.1583 | 5.0 | 3215 | 0.9898 | 0.8335 | 0.8348 | 0.8335 | 0.8115 | 0.7893 | 0.0147 | 0.0141 | 0.9778 | 0.9874 | 0.8335 | 0.7893 | 0.8335 | 0.7926 | 0.8308 |
| 0.0975 | 6.0 | 3858 | 1.1179 | 0.8218 | 0.8260 | 0.8218 | 0.8185 | 0.7573 | 0.0158 | 0.0152 | 0.9781 | 0.9867 | 0.8218 | 0.7573 | 0.8218 | 0.7656 | 0.8203 |
| 0.0529 | 7.0 | 4501 | 1.1545 | 0.8211 | 0.8205 | 0.8211 | 0.7916 | 0.7691 | 0.0160 | 0.0153 | 0.9758 | 0.9865 | 0.8211 | 0.7691 | 0.8211 | 0.7773 | 0.8203 |
| 0.0184 | 8.0 | 5144 | 1.2160 | 0.8234 | 0.8248 | 0.8234 | 0.7770 | 0.7829 | 0.0157 | 0.0151 | 0.9774 | 0.9867 | 0.8234 | 0.7829 | 0.8234 | 0.7771 | 0.8229 |
| 0.0186 | 9.0 | 5787 | 1.2777 | 0.8226 | 0.8244 | 0.8226 | 0.7882 | 0.7851 | 0.0157 | 0.0152 | 0.9774 | 0.9867 | 0.8226 | 0.7851 | 0.8226 | 0.7827 | 0.8223 |
| 0.007 | 10.0 | 6430 | 1.2872 | 0.8218 | 0.8227 | 0.8218 | 0.7823 | 0.7855 | 0.0158 | 0.0152 | 0.9773 | 0.9866 | 0.8218 | 0.7855 | 0.8218 | 0.7809 | 0.8211 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall"], "base_model": "law-ai/InLegalBERT", "model-index": [{"name": "IndianLegalBERT", "results": []}]}
|
xshubhamx/IndianLegalBERT
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:law-ai/InLegalBERT",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T20:55:42+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-law-ai/InLegalBERT #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
IndianLegalBERT
===============
This model is a fine-tuned version of law-ai/InLegalBERT on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2872
* Accuracy: 0.8218
* Precision: 0.8227
* Recall: 0.8218
* Precision Macro: 0.7823
* Recall Macro: 0.7855
* Macro Fpr: 0.0158
* Weighted Fpr: 0.0152
* Weighted Specificity: 0.9773
* Macro Specificity: 0.9866
* Weighted Sensitivity: 0.8218
* Macro Sensitivity: 0.7855
* F1 Micro: 0.8218
* F1 Macro: 0.7809
* F1 Weighted: 0.8211
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.1.2
* Datasets 2.1.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.1.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-law-ai/InLegalBERT #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.1.0\n* Tokenizers 0.15.2"
] |
reinforcement-learning
|
ml-agents
|
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
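The card does not show how to fetch the uploaded run; the hedged sketch below simply downloads the repository files with `huggingface_hub` so they can be reused locally (for example before resuming training). The local directory name is an arbitrary choice.
```python
# Hedged sketch, not from the original card: fetch the uploaded run files from
# the Hub so they can be inspected or reused locally.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="dragonflymoss/ppo-Huggy", local_dir="./downloads/ppo-Huggy")
print(f"Run files downloaded to: {local_dir}")
```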
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: dragonflymoss/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
|
dragonflymoss/ppo-Huggy
| null |
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | null |
2024-04-12T20:55:47+00:00
|
[] |
[] |
TAGS
#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
|
# ppo Agent playing Huggy
This is a trained model of a ppo agent playing Huggy
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how ML-Agents works:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: dragonflymoss/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
|
[
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: dragonflymoss/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
[
"TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n",
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: dragonflymoss/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# idefics2-8b-docvqa-finetuned-tutorial
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
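No usage example is given in the card; the sketch below is a hedged illustration of DocVQA-style inference that follows the base `HuggingFaceM4/idefics2-8b` usage pattern. It assumes this repository holds a full transformers-compatible checkpoint — if it stores only adapters, load the base model first and attach them with PEFT. The image path and question are placeholders.
```python
# Hedged sketch, not from the original card: DocVQA-style inference following
# the base HuggingFaceM4/idefics2-8b usage pattern. Assumes a full checkpoint;
# if the repo only stores adapters, load the base model and attach them first.
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

model_id = "VictorSanh/idefics2-8b-docvqa-finetuned-tutorial"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

image = Image.open("document.png")  # placeholder document image
messages = [{"role": "user",
             "content": [{"type": "image"},
                         {"type": "text", "text": "What is the invoice total?"}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to("cuda")
generated = model.generate(**inputs, max_new_tokens=32)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```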
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "HuggingFaceM4/idefics2-8b", "model-index": [{"name": "idefics2-8b-docvqa-finetuned-tutorial", "results": []}]}
|
VictorSanh/idefics2-8b-docvqa-finetuned-tutorial
| null |
[
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"region:us"
] | null |
2024-04-12T20:58:26+00:00
|
[] |
[] |
TAGS
#safetensors #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #region-us
|
# idefics2-8b-docvqa-finetuned-tutorial
This model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# idefics2-8b-docvqa-finetuned-tutorial\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 2\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#safetensors #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #region-us \n",
"# idefics2-8b-docvqa-finetuned-tutorial\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 2\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text2text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
thdangtr/blip_test
| null |
[
"transformers",
"safetensors",
"blip",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T21:00:45+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #blip #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #blip #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
peft
|
# Model Card for Model ID
This model is a finetune of rhysjones/phi-2-orange-v2 on the cisco-sft dataset, trained on 8x Gaudi-2 AI Accelerator (HPU) chips using DDP with DeepSpeed.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Cisco
- **Funded by [optional]:** Cisco
- **Shared by [optional]:** Cisco
- **Model type:** QLoRA
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** rhysjones/phi-2-orange-v2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
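Since the starter code is still missing, the sketch below is a hedged example that loads this repository as a PEFT (QLoRA) adapter on top of the `rhysjones/phi-2-orange-v2` base named in the card. The prompt is purely illustrative.
```python
# Hedged sketch, not from the original card: loads this repo as a PEFT (QLoRA)
# adapter on top of the rhysjones/phi-2-orange-v2 base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "rhysjones/phi-2-orange-v2"
adapter_id = "ndavidson/cisco-iNAM-phi-sft"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative prompt; the SFT prompt template is not documented in the card.
inputs = tokenizer("How do I configure a VLAN on a Cisco switch?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```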
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
{"license": "apache-2.0", "library_name": "peft", "datasets": ["ndavidson/cisco_sft"], "base_model": "rhysjones/phi-2-orange-v2"}
|
ndavidson/cisco-iNAM-phi-sft
| null |
[
"peft",
"safetensors",
"optimum_habana",
"dataset:ndavidson/cisco_sft",
"arxiv:1910.09700",
"base_model:rhysjones/phi-2-orange-v2",
"license:apache-2.0",
"region:us"
] | null |
2024-04-12T21:03:58+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#peft #safetensors #optimum_habana #dataset-ndavidson/cisco_sft #arxiv-1910.09700 #base_model-rhysjones/phi-2-orange-v2 #license-apache-2.0 #region-us
|
# Model Card for Model ID
This model is a finetune of rhysjones/phi-2-orange-v2 on the cisco-sft dataset, trained on 8x Gaudi-2 AI Accelerator (HPU) chips using DDP with DeepSpeed.
## Model Details
### Model Description
- Developed by: Cisco
- Funded by [optional]: Cisco
- Shared by [optional]: Cisco
- Model type: QLoRA
- Language(s) (NLP):
- License:
- Finetuned from model [optional]: rhysjones/phi-2-orange-v2
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
### Framework versions
- PEFT 0.6.2
|
[
"# Model Card for Model ID\n\nModel is a finetune of rhysjones/phi-2-orange-v2 on the cisco-sft dataset. Finetuned using 8xGaudi-2 AI Accelerator HPU chips using DDP Deepspeed.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: Cisco\n- Funded by [optional]: Cisco\n- Shared by [optional]: Cisco\n- Model type: QLoRA\n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]: rhysjones/phi-2-orange-v2",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure",
"### Framework versions\n\n\n- PEFT 0.6.2"
] |
[
"TAGS\n#peft #safetensors #optimum_habana #dataset-ndavidson/cisco_sft #arxiv-1910.09700 #base_model-rhysjones/phi-2-orange-v2 #license-apache-2.0 #region-us \n",
"# Model Card for Model ID\n\nModel is a finetune of rhysjones/phi-2-orange-v2 on the cisco-sft dataset. Finetuned using 8xGaudi-2 AI Accelerator HPU chips using DDP Deepspeed.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: Cisco\n- Funded by [optional]: Cisco\n- Shared by [optional]: Cisco\n- Model type: QLoRA\n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]: rhysjones/phi-2-orange-v2",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure",
"### Framework versions\n\n\n- PEFT 0.6.2"
] |
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# salon_image_classifier_v1_convnext
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0299
- Accuracy: 0.9898
## Model description
More information needed
## Intended uses & limitations
More information needed
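No usage example is provided; the hedged sketch below assumes the checkpoint is a standard Swin image-classification model. The label set (presumably salon-related categories) is not documented here, and the image path is a placeholder.
```python
# Hedged sketch, not from the original card: assumes a standard Swin
# image-classification checkpoint; the label set is not documented here.
from transformers import pipeline

classifier = pipeline("image-classification", model="100rab25/salon_image_classifier_v1_convnext")
print(classifier("salon_photo.jpg"))  # placeholder image path or URL
```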
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1269 | 1.0 | 207 | 0.0501 | 0.9834 |
| 0.0572 | 2.0 | 415 | 0.0374 | 0.9871 |
| 0.0806 | 3.0 | 623 | 0.0336 | 0.9875 |
| 0.0589 | 4.0 | 831 | 0.0338 | 0.9885 |
| 0.0326 | 4.98 | 1035 | 0.0299 | 0.9898 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-tiny-patch4-window7-224", "model-index": [{"name": "salon_image_classifier_v1_convnext", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.989844278943805, "name": "Accuracy"}]}]}]}
|
100rab25/salon_image_classifier_v1_convnext
| null |
[
"transformers",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T21:06:02+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
salon\_image\_classifier\_v1\_convnext
======================================
This model is a fine-tuned version of microsoft/swin-tiny-patch4-window7-224 on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0299
* Accuracy: 0.9898
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-to-image
|
diffusers
|
# LoRA DreamBooth - squaadinc/1712955957282x527638104999566660
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer.
The weights were trained on the concept prompt:
```
A photo of TOK
```
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition, make sure to install transformers, safetensors, and accelerate, as well as the invisible watermark package:
```
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae, torch_dtype=torch.float16, variant="fp16",
use_safetensors=True
)
pipe.to(device)
# This is where you load your trained weights
specific_safetensors = "pytorch_lora_weights.safetensors"
lora_scale = 0.9
pipe.load_lora_weights(
'squaadinc/1712955957282x527638104999566660',
weight_name = specific_safetensors,
# use_auth_token = True
)
prompt = "A majestic A photo of TOK jumping from a big stone at night"
image = pipe(
prompt=prompt,
num_inference_steps=50,
cross_attention_kwargs={"scale": lora_scale}
).images[0]
```
|
{"tags": ["stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora"], "datasets": ["Theuzs/Me"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "A photo of TOK", "inference": false}
|
squaadinc/1712955957282x527638104999566660
| null |
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"dataset:Theuzs/Me",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | null |
2024-04-12T21:06:11+00:00
|
[] |
[] |
TAGS
#diffusers #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #dataset-Theuzs/Me #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us
|
# LoRA DreamBooth - squaadinc/1712955957282x527638104999566660
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer.
The weights were trained on the concept prompt:
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
In addition, make sure to install transformers, safetensors, and accelerate, as well as the invisible watermark package:
To just use the base model, you can run:
|
[
"# LoRA DreamBooth - squaadinc/1712955957282x527638104999566660\nThese are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer. \nThe weights were trained on the concept prompt: \n \nUse this keyword to trigger your custom model in your prompts. \nLoRA for the text encoder was enabled: False.\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Usage\nMake sure to upgrade diffusers to >= 0.19.0:\n\nIn addition make sure to install transformers, safetensors, accelerate as well as the invisible watermark:\n\nTo just use the base model, you can run:"
] |
[
"TAGS\n#diffusers #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #dataset-Theuzs/Me #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us \n",
"# LoRA DreamBooth - squaadinc/1712955957282x527638104999566660\nThese are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer. \nThe weights were trained on the concept prompt: \n \nUse this keyword to trigger your custom model in your prompts. \nLoRA for the text encoder was enabled: False.\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Usage\nMake sure to upgrade diffusers to >= 0.19.0:\n\nIn addition make sure to install transformers, safetensors, accelerate as well as the invisible watermark:\n\nTo just use the base model, you can run:"
] |
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** ahmetyaylalioglu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-13b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
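A minimal inference sketch is given below. It assumes `transformers`, `accelerate` and `bitsandbytes` are installed (the checkpoint is a merged 4-bit model), and the prompt is purely illustrative, since this card does not specify a prompt format.

```python
# Minimal inference sketch; the prompt is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ahmetyaylalioglu/promptrecover_4bit_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Recover the original prompt:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```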
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-2-13b-bnb-4bit"}
|
ahmetyaylalioglu/promptrecover_4bit_merged
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-2-13b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null |
2024-04-12T21:08:52+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-2-13b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
|
# Uploaded model
- Developed by: ahmetyaylalioglu
- License: apache-2.0
- Finetuned from model : unsloth/llama-2-13b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: ahmetyaylalioglu\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-13b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-2-13b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n",
"# Uploaded model\n\n- Developed by: ahmetyaylalioglu\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-13b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | null |
This is the Q4_0 quantized model for llama.cpp:
https://github.com/ggerganov/llama.cpp/pull/6515
|
{"license": "other", "license_name": "databricks-open-model-license", "license_link": "https://www.databricks.com/legal/open-model-license"}
|
phymbert/dbrx-16x12b-instruct-q4_0-gguf
| null |
[
"gguf",
"license:other",
"region:us"
] | null |
2024-04-12T21:08:53+00:00
|
[] |
[] |
TAGS
#gguf #license-other #region-us
|
This is the Q4_0 quantized model for URL:
URL
|
[] |
[
"TAGS\n#gguf #license-other #region-us \n"
] |
null | null |
This is the Q8_0 quantized model for llama.cpp:
https://github.com/ggerganov/llama.cpp/pull/6515
|
{"license": "other", "license_name": "databricks-open-model-license", "license_link": "https://www.databricks.com/legal/open-model-license"}
|
phymbert/dbrx-16x12b-instruct-q8_0-gguf
| null |
[
"gguf",
"license:other",
"region:us"
] | null |
2024-04-12T21:08:58+00:00
|
[] |
[] |
TAGS
#gguf #license-other #region-us
|
This is the Q8_0 quantized model for URL:
URL
|
[] |
[
"TAGS\n#gguf #license-other #region-us \n"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
adammoss/patch
| null |
[
"transformers",
"safetensors",
"patchgpt",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T21:10:09+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #patchgpt #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #patchgpt #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Fredithefish/MystixNoromaidx
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
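As one possible route, a single-file quant from the table below can be loaded with `llama-cpp-python`. The sketch assumes the Q4_K_S file has already been downloaded locally, and the prompt is illustrative.

```python
# Minimal sketch with llama-cpp-python; assumes MystixNoromaidx.Q4_K_S.gguf was downloaded.
from llama_cpp import Llama

llm = Llama(model_path="MystixNoromaidx.Q4_K_S.gguf", n_ctx=4096)
out = llm("Write a short scene set in a haunted library.", max_tokens=128)
print(out["choices"][0]["text"])
```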
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q2_K.gguf) | Q2_K | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.IQ3_XS.gguf) | IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q3_K_S.gguf) | Q3_K_S | 20.5 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.IQ3_M.gguf) | IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q3_K_L.gguf) | Q3_K_L | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.IQ4_XS.gguf) | IQ4_XS | 25.5 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q5_K_S.gguf) | Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q5_K_M.gguf) | Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q6_K.gguf) | Q6_K | 38.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-GGUF/resolve/main/MystixNoromaidx.Q8_0.gguf) | Q8_0 | 49.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "library_name": "transformers", "base_model": "Fredithefish/MystixNoromaidx", "quantized_by": "mradermacher"}
|
mradermacher/MystixNoromaidx-GGUF
| null |
[
"transformers",
"gguf",
"en",
"base_model:Fredithefish/MystixNoromaidx",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T21:14:17+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #en #base_model-Fredithefish/MystixNoromaidx #endpoints_compatible #region-us
|
About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #en #base_model-Fredithefish/MystixNoromaidx #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0001_idpo_same_6iters_iter_2
This model is a fine-tuned version of [ShenaoZ/0.0001_idpo_same_6iters_iter_1](https://huggingface.co/ShenaoZ/0.0001_idpo_same_6iters_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.0001_idpo_same_6iters_iter_1", "model-index": [{"name": "0.0001_idpo_same_6iters_iter_2", "results": []}]}
|
ShenaoZ/0.0001_idpo_same_6iters_iter_2
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0001_idpo_same_6iters_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T21:15:43+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0001_idpo_same_6iters_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.0001_idpo_same_6iters_iter_2
This model is a fine-tuned version of ShenaoZ/0.0001_idpo_same_6iters_iter_1 on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
[
"# 0.0001_idpo_same_6iters_iter_2\n\nThis model is a fine-tuned version of ShenaoZ/0.0001_idpo_same_6iters_iter_1 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0001_idpo_same_6iters_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.0001_idpo_same_6iters_iter_2\n\nThis model is a fine-tuned version of ShenaoZ/0.0001_idpo_same_6iters_iter_1 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-text-to-sql-flash-attention-2
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset.
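This repository is tagged as a PEFT adapter, so one way to try it is to attach the adapter to the base model. The sketch below assumes `transformers`, `peft` and `accelerate` are installed; the example prompt is illustrative, as the card does not document a prompt template.

```python
# Minimal sketch: load the base model and attach this PEFT adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
adapter_id = "Cfmaley/Mistral-7B-text-to-sql-flash-attention-2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Question: List the names of all customers who placed an order in 2023.\nSQL:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```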
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.1", "model-index": [{"name": "Mistral-7B-text-to-sql-flash-attention-2", "results": []}]}
|
Cfmaley/Mistral-7B-text-to-sql-flash-attention-2
| null |
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null |
2024-04-12T21:16:31+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.1 #license-apache-2.0 #region-us
|
# Mistral-7B-text-to-sql-flash-attention-2
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.1 on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# Mistral-7B-text-to-sql-flash-attention-2\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.1 on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.1 #license-apache-2.0 #region-us \n",
"# Mistral-7B-text-to-sql-flash-attention-2\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.1 on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bigbird-roberta-base-finetuned-sql-classification-with_schema_question
This model is a fine-tuned version of [google/bigbird-roberta-base](https://huggingface.co/google/bigbird-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4232
- Accuracy: 0.8337
- F1: 0.8617
- Precision: 0.7971
- Recall: 0.9375
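The checkpoint can be exercised with the standard text-classification pipeline. Note that the input encoding and the meaning of the predicted labels in the example below are assumptions; the card does not document the expected schema/question format.

```python
# Minimal inference sketch; input format and label semantics are assumed, not documented.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="PatWang/bigbird-roberta-base-finetuned-sql-classification-with_schema_question",
)
print(clf("schema: customers(id, name, city) question: How many customers live in Paris?"))
```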
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6156 | 1.0 | 1290 | 0.4973 | 0.7767 | 0.8292 | 0.7180 | 0.9811 |
| 0.4798 | 2.0 | 2580 | 0.4877 | 0.7860 | 0.8358 | 0.7253 | 0.9860 |
| 0.4841 | 3.0 | 3870 | 0.4767 | 0.7969 | 0.8400 | 0.7434 | 0.9656 |
| 0.4573 | 4.0 | 5160 | 0.4716 | 0.8225 | 0.8513 | 0.7921 | 0.92 |
| 0.4048 | 5.0 | 6450 | 0.4232 | 0.8337 | 0.8617 | 0.7971 | 0.9375 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "google/bigbird-roberta-base", "model-index": [{"name": "bigbird-roberta-base-finetuned-sql-classification-with_schema_question", "results": []}]}
|
PatWang/bigbird-roberta-base-finetuned-sql-classification-with_schema_question
| null |
[
"transformers",
"safetensors",
"big_bird",
"text-classification",
"generated_from_trainer",
"base_model:google/bigbird-roberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T21:17:51+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #big_bird #text-classification #generated_from_trainer #base_model-google/bigbird-roberta-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bigbird-roberta-base-finetuned-sql-classification-with\_schema\_question
========================================================================
This model is a fine-tuned version of google/bigbird-roberta-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4232
* Accuracy: 0.8337
* F1: 0.8617
* Precision: 0.7971
* Recall: 0.9375
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.37.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #big_bird #text-classification #generated_from_trainer #base_model-google/bigbird-roberta-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
cilantro9246/ilc26p8
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T21:22:02+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
feature-extraction
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
khairi/esm2_t30_150M_UR50D
| null |
[
"transformers",
"safetensors",
"esm",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T21:23:31+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #esm #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #esm #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
document-question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-base-uncased_finetuned_docvqa
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1691
## Model description
More information needed
## Intended uses & limitations
More information needed
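While the intended use is not documented, the repository name suggests extractive document question answering (DocVQA-style). The sketch below is an assumption about how such a checkpoint could be queried through the `document-question-answering` pipeline; the image path and question are placeholders, and LayoutLMv2 additionally requires `detectron2` and `pytesseract` to be installed for the visual backbone and OCR.

```python
from transformers import pipeline

# Hypothetical usage sketch, not taken from this card.
docqa = pipeline(
    "document-question-answering",
    model="shashin/layoutlmv2-base-uncased_finetuned_docvqa",
)

# "invoice.png" is a placeholder document image; OCR is applied automatically.
result = docqa(image="invoice.png", question="What is the invoice number?")
print(result)
```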
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
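As an illustration only, the values above roughly correspond to the `TrainingArguments` sketch below; the output directory and the evaluation/logging cadence are assumptions (the cadence is inferred from the 50-step validation table that follows), and the actual training script is not documented here.

```python
from transformers import TrainingArguments

# Sketch reproducing the reported hyperparameters; everything not listed in
# the card (output_dir, evaluation/logging cadence) is an assumption.
training_args = TrainingArguments(
    output_dir="layoutlmv2-base-uncased_finetuned_docvqa",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",  # assumed; validation loss is reported every 50 steps
    eval_steps=50,
    logging_steps=50,
)
```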
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.2266 | 0.22 | 50 | 4.7202 |
| 4.4028 | 0.44 | 100 | 4.1366 |
| 4.0483 | 0.66 | 150 | 3.8471 |
| 3.7553 | 0.88 | 200 | 3.5072 |
| 3.4046 | 1.11 | 250 | 3.5053 |
| 3.1543 | 1.33 | 300 | 3.1483 |
| 3.0577 | 1.55 | 350 | 3.0745 |
| 2.8076 | 1.77 | 400 | 2.8592 |
| 2.456 | 1.99 | 450 | 2.7019 |
| 1.9787 | 2.21 | 500 | 2.7553 |
| 1.9308 | 2.43 | 550 | 2.3374 |
| 1.7637 | 2.65 | 600 | 2.2109 |
| 1.7719 | 2.88 | 650 | 2.9137 |
| 1.6661 | 3.1 | 700 | 2.7946 |
| 1.3154 | 3.32 | 750 | 2.5750 |
| 1.3015 | 3.54 | 800 | 2.4883 |
| 1.2136 | 3.76 | 850 | 2.1466 |
| 1.253 | 3.98 | 900 | 2.2343 |
| 0.96 | 4.2 | 950 | 2.6481 |
| 0.9083 | 4.42 | 1000 | 2.2891 |
| 0.9441 | 4.65 | 1050 | 2.8459 |
| 0.9041 | 4.87 | 1100 | 3.0106 |
| 0.8727 | 5.09 | 1150 | 2.7765 |
| 0.6496 | 5.31 | 1200 | 3.0633 |
| 0.7388 | 5.53 | 1250 | 2.7464 |
| 0.5012 | 5.75 | 1300 | 3.3843 |
| 0.6762 | 5.97 | 1350 | 3.6035 |
| 0.4907 | 6.19 | 1400 | 3.4269 |
| 0.5893 | 6.42 | 1450 | 3.2352 |
| 0.4987 | 6.64 | 1500 | 3.2802 |
| 0.3867 | 6.86 | 1550 | 3.8191 |
| 0.6091 | 7.08 | 1600 | 3.4476 |
| 0.4088 | 7.3 | 1650 | 3.5099 |
| 0.4135 | 7.52 | 1700 | 3.4519 |
| 0.3859 | 7.74 | 1750 | 3.4147 |
| 0.335 | 7.96 | 1800 | 3.8082 |
| 0.2068 | 8.19 | 1850 | 4.3927 |
| 0.3149 | 8.41 | 1900 | 3.7065 |
| 0.2526 | 8.63 | 1950 | 3.6056 |
| 0.451 | 8.85 | 2000 | 3.7065 |
| 0.3792 | 9.07 | 2050 | 3.8738 |
| 0.2299 | 9.29 | 2100 | 3.8282 |
| 0.3064 | 9.51 | 2150 | 3.6586 |
| 0.26 | 9.73 | 2200 | 3.9155 |
| 0.3218 | 9.96 | 2250 | 3.5863 |
| 0.2826 | 10.18 | 2300 | 3.5095 |
| 0.149 | 10.4 | 2350 | 3.4537 |
| 0.1213 | 10.62 | 2400 | 3.8778 |
| 0.2157 | 10.84 | 2450 | 3.8106 |
| 0.2149 | 11.06 | 2500 | 4.2672 |
| 0.1212 | 11.28 | 2550 | 4.2534 |
| 0.1664 | 11.5 | 2600 | 4.3033 |
| 0.1487 | 11.73 | 2650 | 3.9483 |
| 0.1088 | 11.95 | 2700 | 3.9682 |
| 0.0791 | 12.17 | 2750 | 4.2143 |
| 0.0734 | 12.39 | 2800 | 4.2175 |
| 0.128 | 12.61 | 2850 | 4.2613 |
| 0.1851 | 12.83 | 2900 | 3.9094 |
| 0.1215 | 13.05 | 2950 | 4.2045 |
| 0.0438 | 13.27 | 3000 | 4.5802 |
| 0.0107 | 13.5 | 3050 | 4.7988 |
| 0.0606 | 13.72 | 3100 | 5.0228 |
| 0.0819 | 13.94 | 3150 | 4.9309 |
| 0.1375 | 14.16 | 3200 | 4.8995 |
| 0.0729 | 14.38 | 3250 | 4.7900 |
| 0.0832 | 14.6 | 3300 | 4.6417 |
| 0.0687 | 14.82 | 3350 | 4.8781 |
| 0.0516 | 15.04 | 3400 | 4.9231 |
| 0.056 | 15.27 | 3450 | 5.1059 |
| 0.0672 | 15.49 | 3500 | 5.0563 |
| 0.1036 | 15.71 | 3550 | 4.5889 |
| 0.1551 | 15.93 | 3600 | 4.4488 |
| 0.0509 | 16.15 | 3650 | 4.8964 |
| 0.0505 | 16.37 | 3700 | 4.9259 |
| 0.0295 | 16.59 | 3750 | 4.8617 |
| 0.0766 | 16.81 | 3800 | 4.7973 |
| 0.0103 | 17.04 | 3850 | 4.9249 |
| 0.0148 | 17.26 | 3900 | 4.9319 |
| 0.0105 | 17.48 | 3950 | 5.1000 |
| 0.0341 | 17.7 | 4000 | 4.9627 |
| 0.0547 | 17.92 | 4050 | 5.0149 |
| 0.0106 | 18.14 | 4100 | 5.0924 |
| 0.0045 | 18.36 | 4150 | 5.1550 |
| 0.0072 | 18.58 | 4200 | 5.1620 |
| 0.0428 | 18.81 | 4250 | 5.1546 |
| 0.0443 | 19.03 | 4300 | 5.1506 |
| 0.0244 | 19.25 | 4350 | 5.1720 |
| 0.0096 | 19.47 | 4400 | 5.1640 |
| 0.0265 | 19.69 | 4450 | 5.1647 |
| 0.0057 | 19.91 | 4500 | 5.1691 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "base_model": "microsoft/layoutlmv2-base-uncased", "model-index": [{"name": "layoutlmv2-base-uncased_finetuned_docvqa", "results": []}]}
|
shashin/layoutlmv2-base-uncased_finetuned_docvqa
| null |
[
"transformers",
"tensorboard",
"safetensors",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"base_model:microsoft/layoutlmv2-base-uncased",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T21:24:06+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #layoutlmv2 #document-question-answering #generated_from_trainer #base_model-microsoft/layoutlmv2-base-uncased #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us
|
layoutlmv2-base-uncased\_finetuned\_docvqa
==========================================
This model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 5.1691
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #layoutlmv2 #document-question-answering #generated_from_trainer #base_model-microsoft/layoutlmv2-base-uncased #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/MaziyarPanahi/Goku-8x22B-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Goku-8x22B-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
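As a minimal sketch (not part of the linked READMEs), the split downloads below can be joined by plain byte-wise concatenation into a single `.gguf` file before loading; the filenames follow the Q2_K entry in the table below and should be adjusted to whichever quant you downloaded.

```python
# Join a split download into one GGUF file by byte-wise concatenation.
parts = [
    "Goku-8x22B-v0.1.Q2_K.gguf.part1of2",
    "Goku-8x22B-v0.1.Q2_K.gguf.part2of2",
]

with open("Goku-8x22B-v0.1.Q2_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            # Stream in 1 MiB chunks so multi-GB parts are never fully in memory.
            while chunk := f.read(1 << 20):
                out.write(chunk)
```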
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q2_K.gguf.part2of2) | Q2_K | 52.2 | |
| [PART 1](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.IQ3_XS.gguf.part2of2) | IQ3_XS | 58.3 | |
| [PART 1](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.IQ3_S.gguf.part2of2) | IQ3_S | 61.6 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q3_K_S.gguf.part2of2) | Q3_K_S | 61.6 | |
| [PART 1](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.IQ3_M.gguf.part2of2) | IQ3_M | 64.6 | |
| [PART 1](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q3_K_M.gguf.part2of2) | Q3_K_M | 67.9 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q3_K_L.gguf.part2of2) | Q3_K_L | 72.7 | |
| [PART 1](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.IQ4_XS.gguf.part2of2) | IQ4_XS | 76.5 | |
| [PART 1](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q4_K_S.gguf.part2of2) | Q4_K_S | 80.6 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q4_K_M.gguf.part2of2) | Q4_K_M | 85.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q5_K_S.gguf.part2of2) | Q5_K_S | 97.1 | |
| [PART 1](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q5_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q5_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q5_K_M.gguf.part3of3) | Q5_K_M | 100.1 | |
| [PART 1](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q6_K.gguf.part3of3) | Q6_K | 115.6 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q8_0.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q8_0.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q8_0.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Goku-8x22B-v0.1-GGUF/resolve/main/Goku-8x22B-v0.1.Q8_0.gguf.part4of4) | Q8_0 | 149.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["moe", "mixtral", "sharegpt"], "datasets": ["philschmid/guanaco-sharegpt-style"], "model_name": "Goku-8x22B-v0.1", "base_model": "MaziyarPanahi/Goku-8x22B-v0.1", "model_creator": "MaziyarPanahi", "quantized_by": "mradermacher"}
|
mradermacher/Goku-8x22B-v0.1-GGUF
| null |
[
"transformers",
"moe",
"mixtral",
"sharegpt",
"en",
"dataset:philschmid/guanaco-sharegpt-style",
"base_model:MaziyarPanahi/Goku-8x22B-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T21:27:39+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #moe #mixtral #sharegpt #en #dataset-philschmid/guanaco-sharegpt-style #base_model-MaziyarPanahi/Goku-8x22B-v0.1 #license-apache-2.0 #endpoints_compatible #region-us
|
About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #moe #mixtral #sharegpt #en #dataset-philschmid/guanaco-sharegpt-style #base_model-MaziyarPanahi/Goku-8x22B-v0.1 #license-apache-2.0 #endpoints_compatible #region-us \n"
] |