| pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1–900k) | metadata (stringlengths, 2–438k) | id (stringlengths, 5–122) | last_modified (null) | tags (sequencelengths, 1–1.84k) | sha (null) | created_at (stringlengths, 25–25) | arxiv (sequencelengths, 0–201) | languages (sequencelengths, 0–1.83k) | tags_str (stringlengths, 17–9.34k) | text_str (stringlengths, 0–389k) | text_lists (sequencelengths, 0–722) | processed_texts (sequencelengths, 1–723) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-classification | keras-nlp | This is a [`Bert` model](https://keras.io/api/keras_nlp/models/bert) uploaded using the KerasNLP library.
This model is intended for a `Classifier` task.
Model config:
* **name:** bert_backbone
* **trainable:** True
* **vocabulary_size:** 30522
* **num_layers:** 2
* **num_heads:** 2
* **hidden_dim:** 128
* **intermediate_dim:** 512
* **dropout:** 0.1
* **max_sequence_length:** 512
* **num_segments:** 2
This model card has been generated automatically and should be completed by the model author. See [Model Cards documentation](https://huggingface.co/docs/hub/model-cards) for more information.
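For a quick smoke test, the classifier can be loaded back from the Hub with KerasNLP's `from_preset` API. The snippet below is a minimal sketch, assuming a keras-nlp release with Hugging Face Hub support (`hf://` handles, keras-nlp 0.12 or later); the example input is illustrative.
```python
# Minimal loading sketch; assumes keras-nlp >= 0.12 with "hf://" Hub handles.
import keras_nlp

# Load the uploaded classifier (preprocessing is bundled with the task model).
classifier = keras_nlp.models.BertClassifier.from_preset("hf://samanehs/test_bert")

# Predict directly on raw strings; the output is per-class logits.
print(classifier.predict(["The movie was great!"]))
```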
| {"library_name": "keras-nlp", "pipeline_tag": "text-classification"} | samanehs/test_bert | null | [
"keras-nlp",
"text-classification",
"region:us"
] | null | 2024-04-24T17:10:50+00:00 | [] | [] | TAGS
#keras-nlp #text-classification #region-us
| This is a 'Bert' model uploaded using the KerasNLP library.
This model is intended for a 'Classifier' task.
Model config:
* name: bert_backbone
* trainable: True
* vocabulary_size: 30522
* num_layers: 2
* num_heads: 2
* hidden_dim: 128
* intermediate_dim: 512
* dropout: 0.1
* max_sequence_length: 512
* num_segments: 2
This model card has been generated automatically and should be completed by the model author. See Model Cards documentation for more information.
| [] | [
"TAGS\n#keras-nlp #text-classification #region-us \n"
] |
null | transformers |
# Uploaded model
- **Developed by:** vincentyandex
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
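To run the GGUF weights locally, one option is llama-cpp-python's `from_pretrained` helper. The snippet below is a minimal sketch, assuming `llama-cpp-python` and `huggingface_hub` are installed; the filename glob is an assumption based on the `q8_0` suffix in the repo name.
```python
# Minimal inference sketch; the GGUF filename pattern is an assumption.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="vincentyandex/llama3_8b_chsnovel_q8_0_bs8_step120",
    filename="*q8_0.gguf",  # glob matched against the files in the repo
    n_ctx=4096,
)

out = llm("Once upon a time,", max_tokens=64)
print(out["choices"][0]["text"])
```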
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | vincentyandex/llama3_8b_chsnovel_q8_0_bs8_step120 | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:11:12+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: vincentyandex
- License: apache-2.0
- Finetuned from model: unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL" width="200"/>
| [
"# Uploaded model\n\n- Developed by: vincentyandex\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: vincentyandex\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
## Installation from source
```bash
git clone https://github.com/foundation-model-stack/fms-extras
cd fms-extras
pip install -e .
```
## Description
This model is intended to be used as an accelerator for [granite 7B (instruct lab)](https://huggingface.co/instructlab/granite-7b-lab) and takes inspiration
from the Medusa speculative decoding architecture. This accelerator modifies the MLP into a multi-stage MLP, where each stage predicts
a single token in the draft based on both a state vector and sampled token
from the prior stage (the base model can be considered stage 0).
The state vector from the base model provides contextual information to the accelerator,
while conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams.
Note: The underlying MLP speculator is a generic architecture that can be trained with any generative model to accelerate inference.
Training is lightweight and can be completed in only a few days, depending on base model size and speed.
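To make the architecture concrete, here is a toy PyTorch rendering of the multi-stage MLP idea, where each stage consumes the running state vector plus the embedding of the previously sampled token. This is an illustrative sketch only; all names and dimensions are assumptions, not the fms-extras implementation.
```python
# Toy sketch of a multi-stage MLP speculator (illustrative, not fms-extras).
import torch
import torch.nn as nn

class ToyMLPSpeculator(nn.Module):
    def __init__(self, hidden_dim: int, vocab_size: int, n_stages: int = 5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        # One MLP stage per draft position; each sees [state, prev-token emb].
        self.stages = nn.ModuleList(
            nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim), nn.GELU())
            for _ in range(n_stages)
        )
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, vocab_size) for _ in range(n_stages)
        )

    def forward(self, state: torch.Tensor, last_token: torch.Tensor):
        # state: (batch, hidden) base-model state vector ("stage 0")
        # last_token: (batch,) token sampled by the base model
        draft, tok = [], last_token
        for stage, head in zip(self.stages, self.heads):
            state = stage(torch.cat([state, self.embed(tok)], dim=-1))
            tok = head(state).argmax(dim=-1)  # greedy draft token
            draft.append(tok)
        return torch.stack(draft, dim=1)  # (batch, n_stages) draft n-gram
```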
## Repository Links
1. [Paged Attention KV-Cache / Speculator](https://github.com/foundation-model-stack/fms-extras)
2. [Production Server with speculative decoding](https://github.com/IBM/text-generation-inference.git)
3. [Speculator training](https://github.com/foundation-model-stack/fms-fsdp/pull/35)
## Samples
_Note: For all samples, your environment must have access to CUDA_
### Production Server Sample
*To try this out running in a production-like environment, please use the pre-built docker image:*
#### Setup
```bash
HF_HUB_CACHE=/hf_hub_cache
chmod a+w $HF_HUB_CACHE
HF_HUB_TOKEN="your huggingface hub token"
TGIS_IMAGE=quay.io/wxpe/text-gen-server:main.ee927a4
docker pull $TGIS_IMAGE
# optionally download granite-7b-lab if the weights do not already exist
docker run --rm \
-v $HF_HUB_CACHE:/models \
-e HF_HUB_CACHE=/models \
-e TRANSFORMERS_CACHE=/models \
$TGIS_IMAGE \
text-generation-server download-weights \
instructlab/granite-7b-lab \
--token $HF_HUB_TOKEN
# optionally download the speculator model if the weights do not already exist
docker run --rm \
-v $HF_HUB_CACHE:/models \
-e HF_HUB_CACHE=/models \
-e TRANSFORMERS_CACHE=/models \
$TGIS_IMAGE \
text-generation-server download-weights \
ibm/granite-7b-lab-accelerator \
--token $HF_HUB_TOKEN
# note: if the weights were downloaded separately (not with the above commands), please place them in the HF_HUB_CACHE directory and refer to them with /models/<model_name>
docker run -d --rm --gpus all \
--name my-tgis-server \
-p 8033:8033 \
-v $HF_HUB_CACHE:/models \
-e HF_HUB_CACHE=/models \
-e TRANSFORMERS_CACHE=/models \
-e MODEL_NAME=instructlab/granite-7b-lab \
-e SPECULATOR_NAME=ibm/granite-7b-lab-accelerator \
-e FLASH_ATTENTION=true \
-e PAGED_ATTENTION=true \
-e DTYPE=float16 \
$TGIS_IMAGE
# check logs and wait for "gRPC server started on port 8033" and "HTTP server started on port 3000"
docker logs my-tgis-server -f
# get the client sample (Note: The first prompt will take longer as there is a warmup time)
conda create -n tgis-client-env python=3.11
conda activate tgis-client-env
git clone --branch main --single-branch https://github.com/IBM/text-generation-inference.git
cd text-generation-inference/integration_tests
make gen-client
pip install . --no-cache-dir
```
#### Run Sample
```bash
python sample_client.py
```
_Note: first prompt may be slower as there is a slight warmup time_
### Minimal Sample
*To try this out with the fms-native compiled model, please execute the following:*
#### Install
```bash
git clone --branch ibm_7b_instruct_lab_variant --single-branch https://github.com/JRosenkranz/fms-extras.git
(cd fms-extras && pip install -e .)
pip install transformers==4.35.0 sentencepiece numpy
```
#### Run Sample
##### batch_size=1 (compile + cudagraphs)
```bash
MODEL_PATH=/path/to/instructlab/granite-7b-lab
python fms-extras/scripts/paged_speculative_inference.py \
--variant=7b.ibm_instruct_lab \
--model_path=$MODEL_PATH \
--model_source=hf \
--tokenizer=$MODEL_PATH \
--speculator_path=ibm/granite-7b-lab-accelerator \
--speculator_source=hf \
--speculator_variant=1_4b \
--top_k_tokens_per_head=4,3,2,2,2 \
--compile \
--compile_mode=reduce-overhead
```
##### batch_size=1 (compile)
```bash
MODEL_PATH=/path/to/instructlab/granite-7b-lab
python fms-extras/scripts/paged_speculative_inference.py \
--variant=7b.ibm_instruct_lab \
--model_path=$MODEL_PATH \
--model_source=hf \
--tokenizer=$MODEL_PATH \
--speculator_path=ibm/granite-7b-lab-accelerator \
--speculator_source=hf \
--speculator_variant=1_4b \
--top_k_tokens_per_head=4,3,2,2,2 \
--compile
```
##### batch_size=4 (compile)
```bash
MODEL_PATH=/path/to/instructlab/granite-7b-lab
python fms-extras/scripts/paged_speculative_inference.py \
--variant=7b.ibm_instruct_lab \
--model_path=$MODEL_PATH \
--model_source=hf \
--tokenizer=$MODEL_PATH \
--speculator_path=ibm/granite-7b-lab-accelerator \
--speculator_source=hf \
--speculator_variant=1_4b \
--top_k_tokens_per_head=4,3,2,2,2 \
--batch_input \
--compile
``` | {"license": "llama2"} | ibm/granite-7b-lab-accelerator | null | [
"transformers",
"safetensors",
"mlp_speculator",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:11:25+00:00 | [] | [] | TAGS
#transformers #safetensors #mlp_speculator #license-llama2 #endpoints_compatible #region-us
|
## Installation from source
## Description
This model is intended to be used as an accelerator for granite 7B (instruct lab) and takes inspiration
from the Medusa speculative decoding architecture. This accelerator modifies the MLP into a multi-stage MLP, where each stage predicts
a single token in the draft based on both a state vector and sampled token
from the prior stage (the base model can be considered stage 0).
The state vector from the base model provides contextual information to the accelerator,
while conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams.
Note: The underlying MLP speculator is a generic architecture that can be trained with any generative model to accelerate inference.
Training is lightweight and can be completed in only a few days, depending on base model size and speed.
## Repository Links
1. Paged Attention KV-Cache / Speculator
2. Production Server with speculative decoding
3. Speculator training
## Samples
_Note: For all samples, your environment must have access to CUDA_
### Production Server Sample
*To try this out running in a production-like environment, please use the pre-built docker image:*
#### Setup
#### Run Sample
_Note: first prompt may be slower as there is a slight warmup time_
### Minimal Sample
*To try this out with the fms-native compiled model, please execute the following:*
#### Install
#### Run Sample
##### batch_size=1 (compile + cudagraphs)
##### batch_size=1 (compile)
##### batch_size=4 (compile)
| [
"## Installation from source",
"## Description\n\nThis model is intended to be used as an accelerator for granite 7B (instruct lab) and takes inspiration \nfrom the Medusa speculative decoding architecture. This accelerator modifies the MLP into a multi-stage MLP, where each stage predicts \na single token in the draft based on both a state vector and sampled token\nfrom the prior stage (the base model can be considered stage 0).\nThe state vector from the base model provides contextual information to the accelerator, \nwhile conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams.\n\nNote: The underlying MLP speculator is a generic architecture that can be trained with any generative model to accelerate inference. \nTraining is light-weight and can be completed in only a few days depending on base model size and speed.",
"## Repository Links\n\n1. Paged Attention KV-Cache / Speculator\n2. Production Server with speculative decoding\n3. Speculator training",
"## Samples\n\n_Note: For all samples, your environment must have access to cuda_",
"### Production Server Sample\n\n*To try this out running in a production-like environment, please use the pre-built docker image:*",
"#### Setup",
"#### Run Sample\n\n\n\n_Note: first prompt may be slower as there is a slight warmup time_",
"### Minimal Sample\n\n*To try this out with the fms-native compiled model, please execute the following:*",
"#### Install",
"#### Run Sample",
"##### batch_size=1 (compile + cudagraphs)",
"##### batch_size=1 (compile)",
"##### batch_size=4 (compile)"
] | [
"TAGS\n#transformers #safetensors #mlp_speculator #license-llama2 #endpoints_compatible #region-us \n",
"## Installation from source",
"## Description\n\nThis model is intended to be used as an accelerator for granite 7B (instruct lab) and takes inspiration \nfrom the Medusa speculative decoding architecture. This accelerator modifies the MLP into a multi-stage MLP, where each stage predicts \na single token in the draft based on both a state vector and sampled token\nfrom the prior stage (the base model can be considered stage 0).\nThe state vector from the base model provides contextual information to the accelerator, \nwhile conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams.\n\nNote: The underlying MLP speculator is a generic architecture that can be trained with any generative model to accelerate inference. \nTraining is light-weight and can be completed in only a few days depending on base model size and speed.",
"## Repository Links\n\n1. Paged Attention KV-Cache / Speculator\n2. Production Server with speculative decoding\n3. Speculator training",
"## Samples\n\n_Note: For all samples, your environment must have access to cuda_",
"### Production Server Sample\n\n*To try this out running in a production-like environment, please use the pre-built docker image:*",
"#### Setup",
"#### Run Sample\n\n\n\n_Note: first prompt may be slower as there is a slight warmup time_",
"### Minimal Sample\n\n*To try this out with the fms-native compiled model, please execute the following:*",
"#### Install",
"#### Run Sample",
"##### batch_size=1 (compile + cudagraphs)",
"##### batch_size=1 (compile)",
"##### batch_size=4 (compile)"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ripaaiii/fine-tune-C1-revised-lr6-boxkecil20_besar5 | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:12:05+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2190
- Accuracy: 0.9235
- F1: 0.9234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
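As a reference, the list above maps directly onto transformers' `TrainingArguments`; the sketch below is illustrative rather than the exact training script (Adam betas and epsilon are the library defaults).
```python
# The hyperparameters above, expressed as TrainingArguments (sketch only).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```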
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7995 | 1.0 | 250 | 0.3161 | 0.9045 | 0.9017 |
| 0.254 | 2.0 | 500 | 0.2190 | 0.9235 | 0.9234 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.2.2
- Datasets 2.16.0
- Tokenizers 0.13.3
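For inference, the fine-tuned checkpoint works with the standard `pipeline` API. The snippet below is a minimal sketch; the example sentence is illustrative, and label names come from the model's config.
```python
# Minimal inference sketch using the transformers pipeline API.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mikechen/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you this weekend!"))
# e.g. [{'label': 'joy', 'score': ...}] (exact labels depend on the config)
```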
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9235, "name": "Accuracy"}, {"type": "f1", "value": 0.9233762889937281, "name": "F1"}]}]}]} | mikechen/distilbert-base-uncased-finetuned-emotion | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:13:15+00:00 | [] | [] | TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2190
* Accuracy: 0.9235
* F1: 0.9234
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.32.1
* Pytorch 2.2.2
* Datasets 2.16.0
* Tokenizers 0.13.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.32.1\n* Pytorch 2.2.2\n* Datasets 2.16.0\n* Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.32.1\n* Pytorch 2.2.2\n* Datasets 2.16.0\n* Tokenizers 0.13.3"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | CMU-AIR2/math-deepseek-FULL-ArithHardC12-FTMWP-FULL | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T17:14:18+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
- Original model is [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B)
- quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp)
## Ollama
Modelfile
```
FROM Llama-3-Open-Ko-8B-Q8_0.gguf
TEMPLATE """{{- if .System }}
<s>{{ .System }}</s>
{{- end }}
<s>Human:
{{ .Prompt }}</s>
<s>Assistant:
"""
SYSTEM """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions."""
PARAMETER temperature 0
PARAMETER num_predict 3000
PARAMETER num_ctx 4096
PARAMETER stop <s>
PARAMETER stop </s>
```
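The `FROM` line above expects the quantized weights to be local. Below is a minimal download sketch using `huggingface_hub` (assumed installed), with the filename taken from the `FROM` line; afterwards, `ollama create <model-name> -f Modelfile` registers the model.
```python
# Fetch the GGUF file referenced by the Modelfile's FROM line.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="teddylee777/Llama-3-Open-Ko-8B-gguf",
    filename="Llama-3-Open-Ko-8B-Q8_0.gguf",  # name taken from the FROM line
)
print(path)  # local path to place next to (or reference from) the Modelfile
```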
> Update @ 2024.04.24: Release Llama-3-Open-Ko-8B model & [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)
## Model Details
**Llama-3-Open-Ko-8B**
The Llama-3-Open-Ko-8B model is a continued pretrained language model based on the Llama-3-8B framework. This model is trained with over 60GB of deduplicated texts sourced from publicly available resources. With the new Llama-3 tokenizer, the model has been pretrained with more than 17.7B tokens, which is slightly more than that processed by the Korean tokenizer of Llama-2. Training was conducted on a TPUv5e-256, supported by Google's TRC program.
**Llama-3-Open-Ko-8B-Instruct-preview**
The Instruction model, named Llama-3-Open-Ko-8B-Instruct-preview, incorporates concepts from the [Chat Vector paper](https://arxiv.org/abs/2310.04799). This model is a preview and has not been fine-tuned with any Korean instruction set, making it a strong starting point for developing new chat and instruct models.
**Meta Llama-3**
Developed and released by Meta, the Meta Llama 3 family of large language models (LLMs) are optimized for dialogue use cases and excel across common industry benchmarks, emphasizing helpfulness and safety.
**Model Developers**: Junbum Lee (Beomi)
**Variations**: Llama-3-Open-Ko is available in one configuration — 8B.
**Input/Output**: Models accept text input and generate text and code.
**Model Architecture**: Llama 3 utilizes an optimized transformer architecture.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama-3-Open-Ko
</td>
<td rowspan="2" >Same as Open-Solar-Ko Dataset
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >17.7B+
</td>
<td>Jun, 2023
</td>
</tr>
</table>
*Dataset list available [here](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B/tree/main/corpus)
## Intended Use
**Commercial and Research Applications**: Llama 3 is designed for use in English, tailored for assistant-like chat in its instruction-tuned models, while the pretrained models are versatile across various natural language generation tasks.
**Out-of-scope**: Any use violating applicable laws, regulations, or the Acceptable Use Policy and Llama 3 Community License is prohibited.
### Responsibility & Safety
Meta's commitment to Responsible AI includes steps to limit misuse and harm while supporting the open source community. Developers are encouraged to implement safety best practices and use resources like [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) to tailor safety needs specifically to their use cases.
#### Responsible Release
Following a rigorous process against misuse, we ensure all safety and ethical guidelines are adhered to, as detailed in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).
## Ethical Considerations and Limitations
Llama 3 is built on the principles of openness, inclusivity, and helpfulness, designed to be accessible and valuable across diverse backgrounds and use cases. Developers should undertake thorough safety testing and tuning for specific applications before deployment.
## Citation instructions
**Llama-3-Open-Ko**
```
@article{llama3openko,
title={Llama-3-Open-Ko},
author={L, Junbum},
year={2024},
url={https://huggingface.co/beomi/Llama-3-Open-Ko-8B}
}
```
**Original Llama-3**
```
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
| {"language": ["en", "ko"], "license": "llama3", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "llama-3-ko"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "https://llama.meta.com/llama3/license"} | teddylee777/Llama-3-Open-Ko-8B-gguf | null | [
"transformers",
"gguf",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"llama-3-ko",
"conversational",
"en",
"ko",
"arxiv:2310.04799",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T17:14:54+00:00 | [
"2310.04799"
] | [
"en",
"ko"
] | TAGS
#transformers #gguf #llama #text-generation #facebook #meta #pytorch #llama-3 #llama-3-ko #conversational #en #ko #arxiv-2310.04799 #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| * Original model is beomi/Llama-3-Open-Ko-8B
* quantized using URL
Ollama
------
Modelfile
>
> Update @ 2024.04.24: Release Llama-3-Open-Ko-8B model & Llama-3-Open-Ko-8B-Instruct-preview
>
>
>
Model Details
-------------
Llama-3-Open-Ko-8B
The Llama-3-Open-Ko-8B model is a continued pretrained language model based on the Llama-3-8B framework. This model is trained with over 60GB of deduplicated texts sourced from publicly available resources. With the new Llama-3 tokenizer, the model has been pretrained with more than 17.7B tokens, which is slightly more than that processed by the Korean tokenizer of Llama-2. Training was conducted on a TPUv5e-256, supported by Google's TRC program.
Llama-3-Open-Ko-8B-Instruct-preview
The Instruction model, named Llama-3-Open-Ko-8B-Instruct-preview, incorporates concepts from the Chat Vector paper. This model is a preview and has not been fine-tuned with any Korean instruction set, making it a strong starting point for developing new chat and instruct models.
Meta Llama-3
Developed and released by Meta, the Meta Llama 3 family of large language models (LLMs) are optimized for dialogue use cases and excel across common industry benchmarks, emphasizing helpfulness and safety.
Model Developers: Junbum Lee (Beomi)
Variations: Llama-3-Open-Ko is available in one configuration — 8B.
Input/Output: Models accept text input and generate text and code.
Model Architecture: Llama 3 utilizes an optimized transformer architecture.
\*Dataset list available here
Intended Use
------------
Commercial and Research Applications: Llama 3 is designed for use in English, tailored for assistant-like chat in its instruction-tuned models, while the pretrained models are versatile across various natural language generation tasks.
Out-of-scope: Any use violating applicable laws, regulations, or the Acceptable Use Policy and Llama 3 Community License is prohibited.
### Responsibility & Safety
Meta's commitment to Responsible AI includes steps to limit misuse and harm while supporting the open source community. Developers are encouraged to implement safety best practices and use resources like Meta Llama Guard 2 and Code Shield to tailor safety needs specifically to their use cases.
#### Responsible Release
Following a rigorous process against misuse, we ensure all safety and ethical guidelines are adhered to, as detailed in our Responsible Use Guide.
Ethical Considerations and Limitations
--------------------------------------
Llama 3 is built on the principles of openness, inclusivity, and helpfulness, designed to be accessible and valuable across diverse backgrounds and use cases. Developers should undertake thorough safety testing and tuning for specific applications before deployment.
instructions
Llama-3-Open-Ko
Original Llama-3
| [
"### Responsibility & Safety\n\n\nMeta's commitment to Responsible AI includes steps to limit misuse and harm while supporting the open source community. Developers are encouraged to implement safety best practices and use resources like Meta Llama Guard 2 and Code Shield to tailor safety needs specifically to their use cases.",
"#### Responsible Release\n\n\nFollowing a rigorous process against misuse, we ensure all safety and ethical guidelines are adhered to, as detailed in our Responsible Use Guide.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nLlama 3 is built on the principles of openness, inclusivity, and helpfulness, designed to be accessible and valuable across diverse backgrounds and use cases. Developers should undertake thorough safety testing and tuning for specific applications before deployment.\n\n\ninstructions\n\n\nLlama-3-Open-Ko\n\n\nOriginal Llama-3"
] | [
"TAGS\n#transformers #gguf #llama #text-generation #facebook #meta #pytorch #llama-3 #llama-3-ko #conversational #en #ko #arxiv-2310.04799 #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Responsibility & Safety\n\n\nMeta's commitment to Responsible AI includes steps to limit misuse and harm while supporting the open source community. Developers are encouraged to implement safety best practices and use resources like Meta Llama Guard 2 and Code Shield to tailor safety needs specifically to their use cases.",
"#### Responsible Release\n\n\nFollowing a rigorous process against misuse, we ensure all safety and ethical guidelines are adhered to, as detailed in our Responsible Use Guide.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nLlama 3 is built on the principles of openness, inclusivity, and helpfulness, designed to be accessible and valuable across diverse backgrounds and use cases. Developers should undertake thorough safety testing and tuning for specific applications before deployment.\n\n\ninstructions\n\n\nLlama-3-Open-Ko\n\n\nOriginal Llama-3"
] |
text-generation | transformers | # Стрела
A fast large language model, built specifically for faster dialogue.
# [IN DEVELOPMENT] | {"language": ["ru", "en"], "library_name": "transformers", "datasets": ["0x7o/oasst2-ru-ppo"], "pipeline_tag": "text-generation"} | gai-labs/strela | null | [
"transformers",
"text-generation",
"ru",
"en",
"dataset:0x7o/oasst2-ru-ppo",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:14:59+00:00 | [] | [
"ru",
"en"
] | TAGS
#transformers #text-generation #ru #en #dataset-0x7o/oasst2-ru-ppo #endpoints_compatible #region-us
| # Стрела
A fast large language model, built specifically for faster dialogue.
# [IN DEVELOPMENT] | [
"# Стрела\n\nБыстрая большая языковая модель, созданная специально для более быстрых диалогов.",
"# [В РАЗРАБОТКЕ]"
] | [
"TAGS\n#transformers #text-generation #ru #en #dataset-0x7o/oasst2-ru-ppo #endpoints_compatible #region-us \n",
"# Стрела\n\nБыстрая большая языковая модель, созданная специально для более быстрых диалогов.",
"# [В РАЗРАБОТКЕ]"
] |
null | transformers |
# Uploaded model
- **Developed by:** Willy030125
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
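Besides llama.cpp-based runtimes, recent transformers releases can load GGUF checkpoints directly, dequantizing them to torch tensors on load. The snippet below is a minimal sketch, assuming transformers 4.41 or later with the `gguf` package; the exact filename inside the repo is a hypothetical placeholder.
```python
# Minimal GGUF-loading sketch via transformers (filename is hypothetical).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Willy030125/LLaMa-3-8B-Alpaca-GGUF"
gguf_file = "llama-3-8b-alpaca.Q8_0.gguf"  # placeholder: check the repo's files

tokenizer = AutoTokenizer.from_pretrained(repo, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo, gguf_file=gguf_file)
```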
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b"} | Willy030125/LLaMa-3-8B-Alpaca-GGUF | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:19:20+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: Willy030125
- License: apache-2.0
- Finetuned from model: unsloth/llama-3-8b
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL" width="200"/>
| [
"# Uploaded model\n\n- Developed by: Willy030125\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: Willy030125\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
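For reference, a minimal sketch (not from the original card) of reconstructing this 4-bit setup and attaching the adapters in this repo; the base model id comes from the card metadata, and `device_map` is an assumption:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization config listed above: 4-bit nf4, double quantization,
# bfloat16 compute. The llm_int8_* options shown above are the library defaults.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",  # assumption: automatic device placement
)

# Attach the adapter weights stored in this repository on top of the quantized base.
model = PeftModel.from_pretrained(
    base, "bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Human_tiny_Seed103"
)
```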
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Human_tiny_Seed103 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-24T17:19:25+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
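As a hedged sketch (not part of the original card), and assuming this repo stores LoRA-style adapter weights, they could be loaded and folded into the base model for standalone inference:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(
    base, "bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Human_tiny_Seed103"
)
model = model.merge_and_unload()  # fold the LoRA deltas into the base weights
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
```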
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Human_tiny_Seed103 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-24T17:19:29+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ibibek/mynewmodel | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:20:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
image-classification | transformers | tags:
- vision
- image-classification
datasets:
- omarques/autotrain-data-dogs-and-cats | {"license": "cc-by-nc-4.0"} | akxier/perros_gatos | null | [
"transformers",
"pytorch",
"vit",
"image-classification",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:23:18+00:00 | [] | [] | TAGS
#transformers #pytorch #vit #image-classification #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
| tags:
- vision
- image-classification
datasets:
- omarques/autotrain-data-dogs-and-cats | [] | [
"TAGS\n#transformers #pytorch #vit #image-classification #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/macadeliccc/Opus-Samantha-Llama-3-8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
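As a minimal, hedged sketch (not from this card): one way to fetch a single quant from this repo and run it locally, assuming the `huggingface_hub` and `llama-cpp-python` packages; the prompt and context size are placeholders:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is marked "fast, recommended" in the table below.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Opus-Samantha-Llama-3-8B-GGUF",
    filename="Opus-Samantha-Llama-3-8B.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write one sentence about llamas.", max_tokens=64)
print(out["choices"][0]["text"])
```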
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Opus-Samantha-Llama-3-8B-GGUF/resolve/main/Opus-Samantha-Llama-3-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Opus-Samantha-Llama-3-8B-GGUF/resolve/main/Opus-Samantha-Llama-3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Opus-Samantha-Llama-3-8B-GGUF/resolve/main/Opus-Samantha-Llama-3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Opus-Samantha-Llama-3-8B-GGUF/resolve/main/Opus-Samantha-Llama-3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Opus-Samantha-Llama-3-8B-GGUF/resolve/main/Opus-Samantha-Llama-3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Opus-Samantha-Llama-3-8B-GGUF/resolve/main/Opus-Samantha-Llama-3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Opus-Samantha-Llama-3-8B-GGUF/resolve/main/Opus-Samantha-Llama-3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Opus-Samantha-Llama-3-8B-GGUF/resolve/main/Opus-Samantha-Llama-3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Opus-Samantha-Llama-3-8B-GGUF/resolve/main/Opus-Samantha-Llama-3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Opus-Samantha-Llama-3-8B-GGUF/resolve/main/Opus-Samantha-Llama-3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Opus-Samantha-Llama-3-8B-GGUF/resolve/main/Opus-Samantha-Llama-3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Opus-Samantha-Llama-3-8B-GGUF/resolve/main/Opus-Samantha-Llama-3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Opus-Samantha-Llama-3-8B-GGUF/resolve/main/Opus-Samantha-Llama-3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Opus-Samantha-Llama-3-8B-GGUF/resolve/main/Opus-Samantha-Llama-3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Opus-Samantha-Llama-3-8B-GGUF/resolve/main/Opus-Samantha-Llama-3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["macadeliccc/opus_samantha"], "base_model": "macadeliccc/Opus-Samantha-Llama-3-8B", "quantized_by": "mradermacher"} | mradermacher/Opus-Samantha-Llama-3-8B-GGUF | null | [
"transformers",
"gguf",
"en",
"dataset:macadeliccc/opus_samantha",
"base_model:macadeliccc/Opus-Samantha-Llama-3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:25:33+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #dataset-macadeliccc/opus_samantha #base_model-macadeliccc/Opus-Samantha-Llama-3-8B #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #dataset-macadeliccc/opus_samantha #base_model-macadeliccc/Opus-Samantha-Llama-3-8B #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Weblet/phi-1.5-turbo1713979458374441_mlabonne-guanaco-llama2-1k_train | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T17:28:50+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
automatic-speech-recognition | transformers |
# Latvian Whisper tiny speech recognition model
This model has been merged into [RaivisDejus/whisper-tiny-lv](https://huggingface.co/RaivisDejus/whisper-tiny-lv) and will be removed in the near future. Please use `RaivisDejus/whisper-tiny-lv` instead.
To improve speech recognition quality, more data is needed; donate your voice at [Balsu talka](https://balsutalka.lv/)
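A minimal usage sketch, assuming the `faster-whisper` package (which loads CTranslate2 models such as this one); the audio path, device, and compute type are placeholders:

```python
from faster_whisper import WhisperModel

model = WhisperModel("RaivisDejus/whisper-tiny-lv-ct2", device="cpu", compute_type="int8")
segments, info = model.transcribe("audio.wav", language="lv")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```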
| {"language": ["lv"], "license": "apache-2.0", "tags": ["Whisper", "CTranslate2"], "metrics": [{"name": "wer", "type": "wer", "value": 21.96}], "pipeline_tag": "automatic-speech-recognition"} | RaivisDejus/whisper-tiny-lv-ct2 | null | [
"transformers",
"Whisper",
"CTranslate2",
"automatic-speech-recognition",
"lv",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:29:21+00:00 | [] | [
"lv"
] | TAGS
#transformers #Whisper #CTranslate2 #automatic-speech-recognition #lv #license-apache-2.0 #endpoints_compatible #region-us
|
# Latvian Whisper tiny speech recognition model
This model has been merged into RaivisDejus/whisper-tiny-lv and will be removed in the near future. Please use 'RaivisDejus/whisper-tiny-lv' instead.
To improve speech recognition quality, more data is needed; donate your voice at Balsu talka
| [
"# Latvian Whisper tiny speech recognition model\n\nThis model has been merged to RaivisDejus/whisper-tiny-lv and will be removed in near future. Please use 'RaivisDejus/whisper-tiny-lv'\n\nTo improve speech recognition quality, more data is needed, donate your voice on Balsu talka"
] | [
"TAGS\n#transformers #Whisper #CTranslate2 #automatic-speech-recognition #lv #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Latvian Whisper tiny speech recognition model\n\nThis model has been merged to RaivisDejus/whisper-tiny-lv and will be removed in near future. Please use 'RaivisDejus/whisper-tiny-lv'\n\nTo improve speech recognition quality, more data is needed, donate your voice on Balsu talka"
] |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLPGroupProject-Finetune-bio-mobilebert-AL
This model is a fine-tuned version of [nlpie/bio-mobilebert](https://huggingface.co/nlpie/bio-mobilebert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0324
- Accuracy: 0.742
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
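For illustration only, these hyperparameters map onto `transformers.TrainingArguments` roughly as follows; this is a hedged sketch, not the authors' actual training script, and `output_dir` is an assumption:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="NLPGroupProject-Finetune-bio-mobilebert-AL",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default.
)
```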
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.3121 | 250 | 0.8727 | 0.727 |
| 35.354 | 0.6242 | 500 | 0.7830 | 0.738 |
| 35.354 | 0.9363 | 750 | 0.7660 | 0.745 |
| 0.8233 | 1.2484 | 1000 | 0.9794 | 0.744 |
| 0.8233 | 1.5605 | 1250 | 0.8635 | 0.746 |
| 0.7285 | 1.8727 | 1500 | 0.6671 | 0.747 |
| 0.7285 | 2.1848 | 1750 | 1.0348 | 0.758 |
| 0.5734 | 2.4969 | 2000 | 1.0761 | 0.747 |
| 0.5734 | 2.8090 | 2250 | 1.0324 | 0.742 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "nlpie/bio-mobilebert", "model-index": [{"name": "NLPGroupProject-Finetune-bio-mobilebert-AL", "results": []}]} | BenjaminTT/NLPGroupProject-Finetune-bio-mobilebert-AL | null | [
"transformers",
"safetensors",
"mobilebert",
"multiple-choice",
"generated_from_trainer",
"base_model:nlpie/bio-mobilebert",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:30:54+00:00 | [] | [] | TAGS
#transformers #safetensors #mobilebert #multiple-choice #generated_from_trainer #base_model-nlpie/bio-mobilebert #license-mit #endpoints_compatible #region-us
| NLPGroupProject-Finetune-bio-mobilebert-AL
==========================================
This model is a fine-tuned version of nlpie/bio-mobilebert on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0324
* Accuracy: 0.742
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.2+cu118
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #mobilebert #multiple-choice #generated_from_trainer #base_model-nlpie/bio-mobilebert #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers | # Llama-3-Ko-8B-dare-ties
This is part of a series exploring 'Base + Language + Instruct' combinations with chat vectors and various merge methods in mergekit.
Thanks again! @beomi
For more details about what this model is and why I'm doing this, check out this model's info [Instruct-vector-diff](https://huggingface.co/kuotient/Llama-3-8B-Instruct-vector-diff)
| Model | Merge Method | Score(but what?) |
|---|---|---|
| [beomi/Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview) | chat vector | - |
| [kuotient/Llama-3-Ko-8B-ties](https://huggingface.co/kuotient/Llama-3-Ko-8B-ties) | Ties | - |
| [kuotient/Llama-3-Ko-8B-dare-ties](https://huggingface.co/kuotient/Llama-3-Ko-8B-dare-ties) | Dare-ties | - |
| [kuotient/Llama-3-Ko-8B-TA](https://huggingface.co/kuotient/Llama-3-Ko-8B-TA) | Task Arithmetic(maybe...? not sure about this) | - |
| WIP | Model stock (I haven't read this paper yet, but still) | - |
The original paper's authors claim density should be around 0.2~0.3, but in practice a higher density gives somewhat better results. You should try other params for better results than this!
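If you want to sweep the density yourself, a small helper like this can stamp out mergekit configs to compare (purely hypothetical: the file names, the chosen density values, and the sweep itself are assumptions, not something from this repo):
```python
import yaml

# Shared skeleton, mirroring the config at the bottom of this card.
template = {
    "merge_method": "dare_ties",
    "base_model": "meta-llama/Meta-Llama-3-8B",
    "dtype": "bfloat16",
}

for density in (0.2, 0.3, 0.53):  # paper's range vs. the value used here
    cfg = dict(template)
    cfg["models"] = [
        {"model": "meta-llama/Meta-Llama-3-8B"},  # base model: no parameters
        {"model": "meta-llama/Meta-Llama-3-8B-Instruct",
         "parameters": {"density": density, "weight": 0.5}},
        {"model": "beomi/Llama-3-Open-Ko-8B",
         "parameters": {"density": density, "weight": 0.5}},
    ]
    with open(f"config_density_{density}.yaml", "w") as f:
        yaml.safe_dump(cfg, f, sort_keys=False)
```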
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "피보나치 수열이 뭐야? 그리고 피보나치 수열에 대해 파이썬 코드를 짜줘볼래?"},
]
---
피보나치 수열은 수학에서 가장 유명한 수열 중 하나입니다. 피보나치 수열은 다음과 같은 규칙으로 생성됩니다.
1. 첫 번째 수는 0, 두 번째 수는 1
2. 이후의 수는 이전 두 수의 합으로 계산
예를 들어, 피보나치 수열의 첫 10개 수는 다음과 같습니다.
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55,...
이 수열은 매우 빠르게 커지기 때문에, 피보나치 수열을 계산하는 데는 일반적으로 지수 함수를 사용합니다.
파이썬 코드를 작성하면 다음과 같습니다.
`(포맷이 깨져서..🤗)``
def fibonacci(n):
if n <= 1:
return n
else:
return fibonacci(n-1) + fibonacci(n-2)
for i in range(10):
print(fibonacci(i))
`🤗``
이 코드는 피보나치 수열의 첫 10개 수를 계산합니다. 결과는 다음과 같습니다.
0
1
1
2
3
5
8
13
21
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "한국의 제헌 헌법에 대해서 자세히 설명해주세요."},
]
---
한국의 제헌 헌법은 1948년 7월 17일 제헌국회에서 의결하여 1948년 7월 27일 공포된 대한민국 헌법의 초안입니다. 제헌 헌법은 대한민국의 헌법적 기본질서를 규정하는 문서로, 대한민국의 국호, 영토, 국민, 국가의 형태, 정부의 조직, 국민의 권리와 의무, 법률의 제정 등에 대해 규정하고 있습니다. 제헌 헌법은 1948년 7월 17일 제헌국회에서 의결되어 1948년 7월 27일 공포된 이후 1952년 12월 5일부터 1962년 12월 5일까지 적용되었습니다.
제헌 헌법의 주요 내용은 다음과 같습니다.
1. 국호: 대한민국
2. 영토: 대한민국의 영토는 38도선 이북과 이남으로 나누어져 있으며, 38도선 이북의 영토는 대한민국의 영토로 인정하고 있습니다.
3. 국민: 대한민국의 국민은 모든 사람으로, 국적을 가진 사람과 국적을 가진 사람의 후손으로 구성됩니다.
4. 국가의 형태: 대한민국은 민주공화국으로, 대통령이 국가의 수반으로서 통치합니다.
5. 정부의 조직: 대통령, 국무총리, 국무위원 등으로 구성되는 정부를 두고 있습니다.
6. 국민의 권리와 의무: 대한민국의 국민은 평등한 권리를 가지며, 국적, 인권, 재산권, 표현의 자유 등 기본적 권리를 보장받습니다.
7. 법률의 제정: 대한민국의 법률은 국회에서 제정하고, 대통령이 공포합니다.
이와 같은 제헌 헌법은 대한민국의 헌법적 기본질서를 규정하는 문서로, 대한민국의 국호, 영토, 국민, 국가의 형태, 정부의 조직, 국민의 권리와 의무, 법률의 제정 등에 대해 규정하고 있습니다.
```
Repetitive results, but I **really really** like this output tho
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "세종대왕의 집현전 맥북 던짐 사건에 대해서 알고 있어?"},
]
---
네, 세종대왕이 집현전에서 맥북을 던졌다는 이야기는 사실이 아닙니다. 이 이야기는 세종대왕이 훈민정음 창제 당시의 일화로, 당시의 학자들이 훈민정음의 창제에 반대하는 의견을 내세워 세종대왕에게 항의를 하자, 세종대왕이 그들을 질책하면서 훈민정음 창제를 강조한 일화로 전해집니다. 하지만 이 일화는 실제로 일어난 일은 아니며, 훈민정음 창제에 반대하는 학자들은 실제로 없었으며, 세종대왕도 훈민정음 창제를 강조한 적은 없습니다. 이 이야기는 후대에 만들어진 것으로 추정됩니다.
세종대왕은 훈민정음 창제에 대해 매우 열의를 내보였고, 훈민정음이 창제되기 전에는 한글을 사용하지 않았습니다. 훈민정음이 창제된 후에는 훈민정음에 기반한 한글을 사용했습니다. 훈민정음은 1443년부터 1446년까지 4년 동안에 걸쳐 창제되었습니다. 이 시기에 세종대왕은 훈민정음의 창제에 매우 관심을 가지고 있었으며, 훈민정음의 창제를 위해 학자들을 불러 모아 훈민정음의 창제에 참여하도록 했습니다. 훈민정음이 창제된 후에는 훈민정음에 기반한 한글을 사용하고, 한글을 사용하는 것을 장려했습니다.
세종대왕은 훈민정음의 창제를 통해 한글을 사용하게 되면, 모든 사람들이 쉽게 읽고 쓰는 글을 사용할 수 있게 되고, 이는 나라의 발전을 위해 매우 중요한 일이라고 생각했습니다. 또한, 훈민정음의 창제를 통해 한글을 사용하게 되면, 나라의 통일을 위해 매우 중요한 일이라고 생각했습니다. 이에 따라 세종대왕은 훈민정음의 창제에 많은 노력을 기울였습니다.
세종대왕이 훈민정음의 창제에 관심을 가졌다는 것은 역사적으로 확인할 수 있습니다. 하지만 세종대왕
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "나는 오늘 사과 5개를 갖고있어. 그리고 어제 나는 사과 2개를 먹었지. 내가 지금 가진 사과는 몇 개지?"},
]
---
네, 오늘 사과 5개 가지고 있고 어제 2개 먹었으니, 지금 가지고 있는 사과는 5개에서 2개를 뺀 3개입니다! 🍎👍
(이 경우에는 5개에서 2개를 뺀 결과인 3개가 정답입니다.)
```
# Llama-3-Ko-8B-dare-ties
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) as a base.
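As a rough sketch of how such a merge is run (paths here are assumptions, and this presumes mergekit is installed, e.g. via `pip install mergekit`), a config like the one in the Configuration section below is applied with mergekit's `mergekit-yaml` entry point:
```python
import subprocess

# Hypothetical file/directory names; --cuda is optional but speeds things up.
subprocess.run(
    ["mergekit-yaml", "dare_ties_config.yaml", "./Llama-3-Ko-8B-dare-ties", "--cuda"],
    check=True,
)
```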
### Models Merged
The following models were included in the merge:
* [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B)
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: meta-llama/Meta-Llama-3-8B
# no parameters necessary for base model
- model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
density: 0.53
weight: 0.5
- model: beomi/Llama-3-Open-Ko-8B
parameters:
density: 0.53
weight: 0.5
merge_method: dare_ties
base_model: meta-llama/Meta-Llama-3-8B
dtype: bfloat16
``` | {"language": ["ko"], "license": "other", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["beomi/Llama-3-Open-Ko-8B", "meta-llama/Meta-Llama-3-8B-Instruct", "meta-llama/Meta-Llama-3-8B"], "license_name": "llama3"} | kuotient/Llama-3-Ko-8B-dare-ties | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"ko",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:beomi/Llama-3-Open-Ko-8B",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T17:31:36+00:00 | [
"2311.03099",
"2306.01708"
] | [
"ko"
] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #ko #arxiv-2311.03099 #arxiv-2306.01708 #base_model-beomi/Llama-3-Open-Ko-8B #base_model-meta-llama/Meta-Llama-3-8B-Instruct #base_model-meta-llama/Meta-Llama-3-8B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Llama-3-Ko-8B-dare-ties
=======================
This is one entry in a series of 'Base + Language + Instruct' experiments, covering the chat vector approach and various merge methods in mergekit.
Thanks again! @beomi
For more details about what this model is and why I'm doing this, check out this model's info Instruct-vector-diff
Model: beomi/Llama-3-Open-Ko-8B-Instruct-preview, Merge Method: chat vector, Score(but what?): -
Model: kuotient/Llama-3-Ko-8B-ties, Merge Method: Ties, Score(but what?): -
Model: kuotient/Llama-3-Ko-8B-dare-ties, Merge Method: Dare-ties, Score(but what?): -
Model: kuotient/Llama-3-Ko-8B-TA, Merge Method: Task Arithmetic(maybe...? not sure about this), Score(but what?): -
Model: WIP, Merge Method: Model stock (I haven't read this paper yet, but still), Score(but what?): -
The original paper's authors claim density should be around 0.2~0.3, but in practice a higher density gives somewhat better results. You should try other params for better results than this!
Repetitive results, but I really really like this output tho
Llama-3-Ko-8B-dare-ties
=======================
This is a merge of pre-trained language models created using mergekit.
Merge Details
-------------
### Merge Method
This model was merged using the DARE TIES merge method using meta-llama/Meta-Llama-3-8B as a base.
### Models Merged
The following models were included in the merge:
* beomi/Llama-3-Open-Ko-8B
* meta-llama/Meta-Llama-3-8B-Instruct
### Configuration
The following YAML configuration was used to produce this model:
| [
"### Merge Method\n\n\nThis model was merged using the DARE TIES merge method using meta-llama/Meta-Llama-3-8B as a base.",
"### Models Merged\n\n\nThe following models were included in the merge:\n\n\n* beomi/Llama-3-Open-Ko-8B\n* meta-llama/Meta-Llama-3-8B-Instruct",
"### Configuration\n\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #ko #arxiv-2311.03099 #arxiv-2306.01708 #base_model-beomi/Llama-3-Open-Ko-8B #base_model-meta-llama/Meta-Llama-3-8B-Instruct #base_model-meta-llama/Meta-Llama-3-8B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Merge Method\n\n\nThis model was merged using the DARE TIES merge method using meta-llama/Meta-Llama-3-8B as a base.",
"### Models Merged\n\n\nThe following models were included in the merge:\n\n\n* beomi/Llama-3-Open-Ko-8B\n* meta-llama/Meta-Llama-3-8B-Instruct",
"### Configuration\n\n\nThe following YAML configuration was used to produce this model:"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 | {"library_name": "peft", "base_model": "deepseek-ai/deepseek-coder-1.3b-instruct"} | CMU-AIR2/math-deepseek_LORA_HardArith_Interm-FTMWP-LORA | null | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:deepseek-ai/deepseek-coder-1.3b-instruct",
"region:us"
] | null | 2024-04-24T17:32:05+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #llama #arxiv-1910.09700 #base_model-deepseek-ai/deepseek-coder-1.3b-instruct #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.2 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] | [
"TAGS\n#peft #safetensors #llama #arxiv-1910.09700 #base_model-deepseek-ai/deepseek-coder-1.3b-instruct #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output1
This model is a fine-tuned version of [microsoft/deberta-v2-xxlarge](https://huggingface.co/microsoft/deberta-v2-xxlarge) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7690
- Accuracy: 0.676
- Macro F1: 0.6761
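As a hedged usage sketch (assuming the fine-tuned weights are published under this repo id; the input sentence is a placeholder):
```python
from transformers import pipeline

# deberta-v2-xxlarge is ~1.5B parameters, so expect significant memory use.
clf = pipeline("text-classification", model="frankie699/output1")
print(clf("An example sentence to classify."))
```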
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
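For illustration only (the output directory is assumed, and anything not listed above is a default), these settings map onto a `TrainingArguments` roughly as follows:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output1",            # assumed from the model name
    learning_rate=6e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=64,  # effective train batch size: 1 x 64 = 64
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```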
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|
| 1.5278 | 0.2286 | 100 | 1.1249 | 0.5146 | 0.4600 |
| 0.9452 | 0.4571 | 200 | 0.8437 | 0.645 | 0.6425 |
| 0.8367 | 0.6857 | 300 | 0.8038 | 0.6477 | 0.6531 |
| 0.8092 | 0.9143 | 400 | 0.7801 | 0.6593 | 0.6611 |
| 0.7679 | 1.1429 | 500 | 0.7868 | 0.6717 | 0.6697 |
| 0.7451 | 1.3714 | 600 | 0.7711 | 0.6647 | 0.6645 |
| 0.7467 | 1.6 | 700 | 0.7646 | 0.6659 | 0.6649 |
| 0.7261 | 1.8286 | 800 | 0.7840 | 0.6649 | 0.6632 |
| 0.7305 | 2.0571 | 900 | 0.7755 | 0.6681 | 0.6707 |
| 0.6742 | 2.2857 | 1000 | 0.7719 | 0.6691 | 0.6707 |
| 0.6728 | 2.5143 | 1100 | 0.7640 | 0.6726 | 0.6726 |
| 0.6691 | 2.7429 | 1200 | 0.7759 | 0.6761 | 0.6783 |
| 0.677 | 2.9714 | 1300 | 0.7690 | 0.676 | 0.6761 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v2-xxlarge", "model-index": [{"name": "output1", "results": []}]} | frankie699/output1 | null | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v2-xxlarge",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:32:22+00:00 | [] | [] | TAGS
#transformers #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v2-xxlarge #license-mit #autotrain_compatible #endpoints_compatible #region-us
| output1
=======
This model is a fine-tuned version of microsoft/deberta-v2-xxlarge on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7690
* Accuracy: 0.676
* Macro F1: 0.6761
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 6e-06
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 64
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.2
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-06\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 64\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v2-xxlarge #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-06\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 64\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# riddle-bot-v1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP
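Purely as a sketch (the LoRA settings and the dataset variable below are assumptions, and `SFTTrainer` signatures vary across trl versions), a trl SFT run with a PEFT adapter using these hyperparameters looks roughly like:
```python
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

args = TrainingArguments(
    output_dir="riddle-bot-v1",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size: 4
    num_train_epochs=4,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    fp16=True,                       # Native AMP
    seed=42,
)

peft_config = LoraConfig(task_type="CAUSAL_LM")  # LoRA hyperparameters: assumed

# `dataset` stands in for the card's otherwise-unspecified "generator" dataset.
trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```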
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "riddle-bot-v1", "results": []}]} | mitultiwari/riddle-bot-v1 | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | 2024-04-24T17:33:25+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us
|
# riddle-bot-v1
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# riddle-bot-v1\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 4\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us \n",
"# riddle-bot-v1\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 4\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | DarshanDeshpande/sherllama3_ia3 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:33:28+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | chillies/vistral-7b-vn-legal-chat | null | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:35:25+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Weblet/phi-1.5-turbo17139798851168182_mlabonne-guanaco-llama2-1k_train | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T17:35:31+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Julesb5/gemma-1.1-2b-it-peft-4-med1-r1 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:36:53+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
## Exllama v2 Quantizations of wavecoder-ultra-1.1-6.7b
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.19">turboderp's ExLlamaV2 v0.0.19</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/microsoft/wavecoder-ultra-6.7b
Note: They updated the model in place, and I have no idea whether there are actually any changes, but I'm tagging this as 1.1 just in case
## Prompt format
No chat template was specified, so the default is used. This may be incorrect; check the original model card for details.
```
<|begin▁of▁sentence|>[INST] <<SYS>>
{system_prompt}
<</SYS>>
{prompt} [/INST] <|end▁of▁sentence|>
```
## Available sizes
No GQA - VRAM requirements will be higher
| Branch | Bits | lm_head bits | Size (4k) | Size (16k) | Description |
| -------------------------------------------------------------- | ---- | ------------ | --------- | ---------- | ----------- |
| [8_0](https://huggingface.co/bartowski/wavecoder-ultra-1.1-6.7b-exl2/tree/8_0) | 8.0 | 8.0 | 9.0 GB | 15.2 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/wavecoder-ultra-1.1-6.7b-exl2/tree/6_5) | 6.5 | 8.0 | 8.2 GB | 14.4 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/bartowski/wavecoder-ultra-1.1-6.7b-exl2/tree/5_0) | 5.0 | 6.0 | 6.8 GB | 13.0 GB | Slightly lower quality vs 6.5, but usable on 8GB cards with 4k context. |
| [4_25](https://huggingface.co/bartowski/wavecoder-ultra-1.1-6.7b-exl2/tree/4_25) | 4.25 | 6.0 | 6.1 GB | 12.3 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/bartowski/wavecoder-ultra-1.1-6.7b-exl2/tree/3_5) | 3.5 | 6.0 | 5.5 GB | 11.7 GB | Lower quality, not recommended. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/wavecoder-ultra-1.1-6.7b-exl2 wavecoder-ultra-1.1-6.7b-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:
Linux:
```shell
huggingface-cli download bartowski/wavecoder-ultra-1.1-6.7b-exl2 --revision 6_5 --local-dir wavecoder-ultra-1.1-6.7b-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
huggingface-cli download bartowski/wavecoder-ultra-1.1-6.7b-exl2 --revision 6_5 --local-dir wavecoder-ultra-1.1-6.7b-exl2-6.5 --local-dir-use-symlinks False
```
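Once a branch is downloaded, loading it with the ExLlamaV2 Python API looks roughly like the sketch below. This is a minimal example under assumptions: the local directory name matches the download commands above, and the sampler settings are illustrative, so check the exllamav2 repo for the current API.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point the config at the downloaded branch directory (assumed name from above).
config = ExLlamaV2Config()
config.model_dir = "wavecoder-ultra-1.1-6.7b-exl2-6_5"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # lazy cache lets autosplit size the layers
model.load_autosplit(cache)                # splits weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7                 # illustrative values, not a recommendation
settings.top_p = 0.9

print(generator.generate_simple("def quicksort(arr):", settings, 200))
```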
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski | {"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["code"], "datasets": ["humaneval"], "metrics": ["code_eval"], "license_link": "https://huggingface.co/microsoft/wavecoder-ultra-6.7b/blob/main/LICENSE", "pipeline_tag": "text-generation", "quantized_by": "bartowski"} | bartowski/wavecoder-ultra-1.1-6.7b-exl2 | null | [
"transformers",
"code",
"text-generation",
"en",
"dataset:humaneval",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:36:56+00:00 | [] | [
"en"
] | TAGS
#transformers #code #text-generation #en #dataset-humaneval #license-mit #endpoints_compatible #region-us
| Exllama v2 Quantizations of wavecoder-ultra-1.1-6.7b
----------------------------------------------------
Using turboderp's ExLlamaV2 v0.0.19 for quantization.
**The "main" branch only contains the URL, download one of the other branches for the model (see below)**
Each branch contains an individual bits per weight, with the main one containing only the URL for further conversions.
Original model: URL
Note: They updated the model in place, no idea if there's actually any changes, but tagging this 1.1 just in case
Prompt format
-------------
No chat template specified so default is used. This may be incorrect, check original model card for details.
Available sizes
---------------
No GQA - VRAM requirements will be higher
Download instructions
---------------------
With git:
With huggingface hub (credit to TheBloke for instructions):
To download a specific branch, use the '--revision' parameter. For example, to download the 6.5 bpw branch:
Linux:
Windows (which apparently doesn't like \_ in folders sometimes?):
Want to support my work? Visit my ko-fi page here: URL
| [] | [
"TAGS\n#transformers #code #text-generation #en #dataset-humaneval #license-mit #endpoints_compatible #region-us \n"
] |
reinforcement-learning | null |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
 To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
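For readers arriving from the course: the heart of REINFORCE is a single policy-gradient loss computed from one episode's log-probabilities and discounted returns. The sketch below is a generic illustration, not code from this repo, and all names are my own.

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    # Discounted return G_t for every timestep, computed backwards over the episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    # Normalizing returns is a common stabilizer (optional).
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    # Ascend log pi(a|s) * G_t, i.e. minimize its negative.
    return -(torch.stack(log_probs) * returns).sum()
```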
| {"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-Pixelcopter-PLE-v0", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "28.50 +/- 17.02", "name": "mean_reward", "verified": false}]}]}]} | rahil1206/Reinforce-Pixelcopter-PLE-v0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null | 2024-04-24T17:37:27+00:00 | [] | [] | TAGS
#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
|
# Reinforce Agent playing Pixelcopter-PLE-v0
This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
| [
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] | [
"TAGS\n#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n",
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic-MARBERT-dialect-Identification-City-Model
This model is a fine-tuned version of [Ammar-alhaj-ali/arabic-MARBERT-dialect-identification-city](https://huggingface.co/Ammar-alhaj-ali/arabic-MARBERT-dialect-identification-city) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8911
- F1-score: 0.5786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
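Expressed as `transformers.TrainingArguments`, these settings would look roughly as follows; `output_dir` is hypothetical, and everything not listed above stays at library defaults (which already match the stated Adam betas and epsilon).

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="arabic-marbert-dialect-city",  # hypothetical name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```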
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1-score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 442 | 1.5955 | 0.4477 |
| 1.8911 | 2.0 | 884 | 1.6004 | 0.5083 |
| 1.3056 | 3.0 | 1326 | 1.5384 | 0.5537 |
| 0.989 | 4.0 | 1768 | 1.7560 | 0.5450 |
| 0.7526 | 5.0 | 2210 | 1.8035 | 0.5572 |
| 0.5765 | 6.0 | 2652 | 1.8953 | 0.5764 |
| 0.4799 | 7.0 | 3094 | 2.3015 | 0.5486 |
| 0.3629 | 8.0 | 3536 | 2.7364 | 0.5368 |
| 0.3629 | 9.0 | 3978 | 2.8416 | 0.5442 |
| 0.277 | 10.0 | 4420 | 2.8911 | 0.5786 |
| 0.2196 | 11.0 | 4862 | 3.2772 | 0.5304 |
| 0.1656 | 12.0 | 5304 | 3.1206 | 0.5394 |
| 0.1555 | 13.0 | 5746 | 3.4600 | 0.5447 |
| 0.1315 | 14.0 | 6188 | 3.4378 | 0.5445 |
| 0.1039 | 15.0 | 6630 | 3.5247 | 0.5409 |
| 0.0796 | 16.0 | 7072 | 3.6359 | 0.5419 |
| 0.0683 | 17.0 | 7514 | 3.6723 | 0.5339 |
| 0.0683 | 18.0 | 7956 | 3.6808 | 0.5435 |
| 0.0567 | 19.0 | 8398 | 3.7677 | 0.5307 |
| 0.0382 | 20.0 | 8840 | 3.8121 | 0.5306 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "base_model": "Ammar-alhaj-ali/arabic-MARBERT-dialect-identification-city", "model-index": [{"name": "Arabic-MARBERT-dialect-Identification-City-Model", "results": []}]} | yemen2016/Arabic-MARBERT-dialect-Identification-City-Model | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:Ammar-alhaj-ali/arabic-MARBERT-dialect-identification-city",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:41:47+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-Ammar-alhaj-ali/arabic-MARBERT-dialect-identification-city #autotrain_compatible #endpoints_compatible #region-us
| Arabic-MARBERT-dialect-Identification-City-Model
================================================
This model is a fine-tuned version of Ammar-alhaj-ali/arabic-MARBERT-dialect-identification-city on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.8911
* F1-score: 0.5786
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-Ammar-alhaj-ali/arabic-MARBERT-dialect-identification-city #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-eLife
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0883
- Rouge1: 0.3665
- Rouge2: 0.0826
- Rougel: 0.3524
- Bertscore: 0.8322
- Readability: 8.6631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
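As a sketch, the listed values map onto `Seq2SeqTrainingArguments` like this; `output_dir` and `predict_with_generate` are assumptions (the latter is typically needed to compute ROUGE during evaluation), and unlisted options stay at library defaults.

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-eLife",  # hypothetical name
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    predict_with_generate=True,  # assumption: required for ROUGE/BERTScore eval
)
```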
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Bertscore | Readability |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-----------:|
| 2.1756 | 1.0 | 544 | 2.1228 | 0.3665 | 0.0834 | 0.3529 | 0.8339 | 8.6929 |
| 2.1907 | 2.0 | 1088 | 2.0947 | 0.3545 | 0.0804 | 0.3418 | 0.8290 | 8.5141 |
| 2.1566 | 3.0 | 1632 | 2.0883 | 0.3665 | 0.0826 | 0.3524 | 0.8322 | 8.6631 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.1
- Datasets 2.19.0
- Tokenizers 0.15.2
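Since the card gives no usage snippet, a plausible inference sketch for this lay-summarization checkpoint is shown below; the repo id comes from this card's metadata and the generation settings are illustrative.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="tanishq1420/flan-t5-base-eLife")
article = "..."  # paste an eLife article or abstract here
print(summarizer(article, max_length=256, truncation=True)[0]["summary_text"])
```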
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "google/flan-t5-base", "model-index": [{"name": "flan-t5-base-eLife", "results": []}]} | tanishq1420/flan-t5-base-eLife | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T17:44:02+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google/flan-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| flan-t5-base-eLife
==================
This model is a fine-tuned version of google/flan-t5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.0883
* Rouge1: 0.3665
* Rouge2: 0.0826
* Rougel: 0.3524
* Bertscore: 0.8322
* Readability: 8.6631
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.35.2
* Pytorch 2.2.1
* Datasets 2.19.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.2.1\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google/flan-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.2.1\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [ai-maker-space/riddle-bot-v1](https://huggingface.co/ai-maker-space/riddle-bot-v1)
* [ai-maker-space/leagaleasy-llama-3-instruct-v2](https://huggingface.co/ai-maker-space/leagaleasy-llama-3-instruct-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: meta-llama/Meta-Llama-3-8B-Instruct
- model: ai-maker-space/leagaleasy-llama-3-instruct-v2
parameters:
density: 0.5
weight: 0.5
- model: ai-maker-space/riddle-bot-v1
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
normalize: false
int8_mask: true
dtype: float16
```
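To reproduce a merge from a config like this, mergekit ships a CLI that consumes the YAML directly; a sketch (flags can vary between mergekit versions):

```shell
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```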
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["ai-maker-space/riddle-bot-v1", "ai-maker-space/leagaleasy-llama-3-instruct-v2", "meta-llama/Meta-Llama-3-8B-Instruct"]} | mitultiwari/llama3-riddle-legal | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:ai-maker-space/riddle-bot-v1",
"base_model:ai-maker-space/leagaleasy-llama-3-instruct-v2",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T17:45:35+00:00 | [
"2306.01708"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #arxiv-2306.01708 #base_model-ai-maker-space/riddle-bot-v1 #base_model-ai-maker-space/leagaleasy-llama-3-instruct-v2 #base_model-meta-llama/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the TIES merge method using meta-llama/Meta-Llama-3-8B-Instruct as a base.
### Models Merged
The following models were included in the merge:
* ai-maker-space/riddle-bot-v1
* ai-maker-space/leagaleasy-llama-3-instruct-v2
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using meta-llama/Meta-Llama-3-8B-Instruct as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* ai-maker-space/riddle-bot-v1\n* ai-maker-space/leagaleasy-llama-3-instruct-v2",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #arxiv-2306.01708 #base_model-ai-maker-space/riddle-bot-v1 #base_model-ai-maker-space/leagaleasy-llama-3-instruct-v2 #base_model-meta-llama/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using meta-llama/Meta-Llama-3-8B-Instruct as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* ai-maker-space/riddle-bot-v1\n* ai-maker-space/leagaleasy-llama-3-instruct-v2",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
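In the absence of an author-provided snippet, a minimal loading sketch consistent with this repo's metadata (a PEFT adapter whose base model is microsoft/phi-2) might be:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: this repo holds only the adapter weights, so the base model
# is loaded first and the adapter is applied on top.
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
model = PeftModel.from_pretrained(base, "vitorandrade/phi-2_PT_2.0")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

inputs = tokenizer("Olá! Como você está?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```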
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 | {"library_name": "peft", "base_model": "microsoft/phi-2"} | vitorandrade/phi-2_PT_2.0 | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"region:us"
] | null | 2024-04-24T17:47:30+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-microsoft/phi-2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-microsoft/phi-2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
reinforcement-learning | sample-factory |
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r jeliasherrero/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
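As a concrete sketch (check the documentation linked above for the exact interface), a finished experiment can also be uploaded with the dedicated helper:

```
python -m sample_factory.huggingface.push_to_hub -r <your-username>/rl_course_vizdoom_health_gathering_supreme -d train_dir/rl_course_vizdoom_health_gathering_supreme
```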
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
| {"library_name": "sample-factory", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "sample-factory"], "model-index": [{"name": "APPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "doom_health_gathering_supreme", "type": "doom_health_gathering_supreme"}, "metrics": [{"type": "mean_reward", "value": "11.77 +/- 4.90", "name": "mean_reward", "verified": false}]}]}]} | jeliasherrero/rl_course_vizdoom_health_gathering_supreme | null | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-24T17:47:36+00:00 | [] | [] | TAGS
#sample-factory #tensorboard #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
A(n) APPO model trained on the doom_health_gathering_supreme environment.
This model was trained using Sample-Factory 2.0: URL
Documentation for how to use Sample-Factory can be found at URL
## Downloading the model
After installing Sample-Factory, download the model with:
## Using the model
To run the model after download, use the 'enjoy' script corresponding to this environment:
You can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.
See URL for more details
## Training with this model
To continue training with this model, use the 'train' script corresponding to this environment:
Note, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at.
| [
"## Downloading the model\n\nAfter installing Sample-Factory, download the model with:",
"## Using the model\n\nTo run the model after download, use the 'enjoy' script corresponding to this environment:\n\n\n\nYou can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.\nSee URL for more details",
"## Training with this model\n\nTo continue training with this model, use the 'train' script corresponding to this environment:\n\n\nNote, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at."
] | [
"TAGS\n#sample-factory #tensorboard #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"## Downloading the model\n\nAfter installing Sample-Factory, download the model with:",
"## Using the model\n\nTo run the model after download, use the 'enjoy' script corresponding to this environment:\n\n\n\nYou can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.\nSee URL for more details",
"## Training with this model\n\nTo continue training with this model, use the 'train' script corresponding to this environment:\n\n\nNote, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
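No snippet is provided, but given the llama/text-generation tags on this repo, a generic transformers sketch (repo id from the metadata; generation settings are illustrative) would be:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="four-two-labs/nord-TinyLlama")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```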
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | four-two-labs/nord-TinyLlama | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T17:47:59+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
## Llama 3 8B 64K
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<img src="https://huggingface.co/winglian/Llama-3-8b-64k-PoSE/resolve/main/output.png" />
This model uses [PoSE](https://huggingface.co/papers/2309.10400) to extend Llama's context length from 8k to 64k at rope_theta: 500000.0.

We continued pretraining with PoSE on 300M tokens from the RedPajama V1 dataset, using a subset with text between 6k and 8k tokens, and trained a rank-stabilized LoRA of rank 256 ([WandB run](https://wandb.ai/oaaic/llama-3-64k/runs/tkcyjt37)).

After continued pretraining, we set rope_theta to 2M to potentially extend the context even further past 64k.
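To make the context setup concrete, here is a minimal sketch (not the original training code) of loading this checkpoint with `transformers` and inspecting or overriding `rope_theta`. The attribute name follows the Llama config in recent `transformers` releases, and the printed value assumes the shipped config carries the post-training 2M setting described above.

```python
# Minimal sketch: load the checkpoint and inspect/override rope_theta.
# Assumptions: a recent transformers release with LlamaConfig.rope_theta,
# and a shipped config carrying the 2M value described above.
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "winglian/Llama-3-8b-64k-PoSE"

config = AutoConfig.from_pretrained(model_id)
print(config.rope_theta)  # expected 2000000.0 per the card (assumption)

# Hypothetical override: revert to the value used during PoSE training
# config.rope_theta = 500000.0

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```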
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Context length | GQA | Token count | Knowledge cutoff |
| --- | --- | --- | --- | --- | --- | --- |
| Llama 3 | A new mix of publicly available online data. | 8B | 8k | Yes | 15T+ | March, 2023 |
| Llama 3 | A new mix of publicly available online data. | 70B | 8k | Yes | 15T+ | December, 2023 |
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
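For readers unfamiliar with GQA, the toy sketch below (our illustration, not Meta's implementation; all shapes are made-up examples) shows the core idea: each key/value head is shared by a group of query heads, which shrinks the KV cache and improves inference scalability.

```python
import torch

# Toy grouped-query attention: 8 query heads share 2 KV heads (illustrative sizes)
batch, seq, n_q_heads, n_kv_heads, head_dim = 1, 16, 8, 2, 64
group = n_q_heads // n_kv_heads  # 4 query heads per shared KV head

q = torch.randn(batch, n_q_heads, seq, head_dim)
k = torch.randn(batch, n_kv_heads, seq, head_dim)  # KV cache is 4x smaller
v = torch.randn(batch, n_kv_heads, seq, head_dim)

# Expand the shared KV heads so each query head has a matching KV head
k = k.repeat_interleave(group, dim=1)
v = v.repeat_interleave(group, dim=1)

attn = torch.softmax(q @ k.transpose(-2, -1) / head_dim**0.5, dim=-1)
out = attn @ v  # (batch, n_q_heads, seq, head_dim)
```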
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original `llama3` codebase.
### Use with transformers
See the snippet below for usage with Transformers:
```python
>>> import transformers
>>> import torch
>>> model_id = "meta-llama/Meta-Llama-3-8B"
>>> pipeline = transformers.pipeline(
...     "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
... )
>>> pipeline("Hey how are you doing today?")
```
### Use with `llama3`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3).
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B --include "original/*" --local-dir Meta-Llama-3-8B
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
| | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
| --- | --- | --- | --- |
| Llama 3 8B | 1.3M | 700 | 390 |
| Llama 3 70B | 6.4M | 700 | 1900 |
| Total | 7.7M | | 2290 |
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
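As a rough sanity check (our back-of-the-envelope arithmetic, not Meta's methodology; the grid carbon intensity below is an assumed value chosen for illustration), the reported total is consistent with GPU hours × TDP × carbon intensity:

```python
# Back-of-the-envelope check of the reported 2290 tCO2eq
gpu_hours = 7.7e6                        # total H100-80GB GPU hours from the table above
tdp_kw = 0.700                           # 700 W TDP per GPU
energy_mwh = gpu_hours * tdp_kw / 1000   # ≈ 5390 MWh
intensity = 0.425                        # assumed grid intensity in tCO2eq/MWh (illustrative)
print(round(energy_mwh * intensity))     # ≈ 2291 tCO2eq
```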
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B model and December 2023 for the 70B model.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
| Category | Benchmark | Llama 3 8B | Llama2 7B | Llama2 13B | Llama 3 70B | Llama2 70B |
| --- | --- | --- | --- | --- | --- | --- |
| General | MMLU (5-shot) | 66.6 | 45.7 | 53.8 | 79.5 | 69.7 |
| General | AGIEval English (3-5 shot) | 45.9 | 28.8 | 38.7 | 63.0 | 54.8 |
| General | CommonSenseQA (7-shot) | 72.6 | 57.6 | 67.6 | 83.8 | 78.7 |
| General | Winogrande (5-shot) | 76.1 | 73.3 | 75.4 | 83.1 | 81.8 |
| General | BIG-Bench Hard (3-shot, CoT) | 61.1 | 38.1 | 47.0 | 81.3 | 65.7 |
| General | ARC-Challenge (25-shot) | 78.6 | 53.7 | 67.6 | 93.0 | 85.3 |
| Knowledge reasoning | TriviaQA-Wiki (5-shot) | 78.5 | 72.1 | 79.6 | 89.7 | 87.5 |
| Reading comprehension | SQuAD (1-shot) | 76.4 | 72.2 | 72.1 | 85.6 | 82.6 |
| Reading comprehension | QuAC (1-shot, F1) | 44.4 | 39.6 | 44.9 | 51.1 | 49.4 |
| Reading comprehension | BoolQ (0-shot) | 75.7 | 65.5 | 66.9 | 79.0 | 73.1 |
| Reading comprehension | DROP (3-shot, F1) | 58.4 | 37.9 | 49.8 | 79.7 | 70.2 |
### Instruction tuned models
| Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
| --- | --- | --- | --- | --- | --- |
| MMLU (5-shot) | 68.4 | 34.1 | 47.8 | 82.0 | 52.9 |
| GPQA (0-shot) | 34.2 | 21.7 | 22.3 | 39.5 | 21.0 |
| HumanEval (0-shot) | 62.2 | 7.9 | 14.0 | 81.7 | 25.6 |
| GSM-8K (8-shot, CoT) | 79.6 | 25.7 | 77.4 | 93.0 | 57.5 |
| MATH (4-shot, CoT) | 30.0 | 3.8 | 6.7 | 50.4 | 11.6 |
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
**Safety**
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
**Refusals**
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only impacts the user experience but can even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.

We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
**Misuse**
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
**CBRNE** (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two-fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### Cyber Security
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### Child Safety
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
```bibtex
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
| {"language": ["en"], "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "axolotl"], "pipeline_tag": "text-generation"} | winglian/Llama-3-8b-64k-PoSE | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"axolotl",
"en",
"arxiv:2309.10400",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T17:48:54+00:00 | [
"2309.10400"
] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #facebook #meta #pytorch #llama-3 #axolotl #en #arxiv-2309.10400 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Llama 3 8B 64K
--------------
<img src="URL alt="Built with Axolotl" width="200" height="32"/>
<img src="URL />
This model uses PoSE to extend Llama's context length from 8k to 64k @ rope\_theta: 500000.0.
We used PoSE with continued pretraining on 300M tokens from the RedPajama V1 dataset using data between 6k-8k tokens.
We have further set rope\_theta to 2M after continued pre-training to potentially further extend the context past 64k.
This was trained on a subset of the RedPajama v1 dataset with text between 6k-8k context. We trained a rank stabilized LoRA of rank 256. WandB
Model Details
-------------
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
Model developers Meta
Variations Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
Input Models input text only.
Output Models generate text and code only.
Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Llama 3 family of models. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date April 18, 2024.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License A custom commercial license is available at: URL
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here.
Intended Use
------------
Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.
Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
How to use
----------
This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original 'llama3' codebase.
### Use with transformers
See the snippet below for usage with Transformers:
### Use with 'llama3'
Please, follow the instructions in the repository.
To download Original checkpoints, see the example command below leveraging 'huggingface-cli':
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
Hardware and Software
---------------------
Training Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
Carbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
CO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
Training Data
-------------
Overview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness The pretraining data has a cutoff of March 2023 for the 8B model and December 2023 for the 70B model.
Benchmarks
----------
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.
### Base pretrained models
### Instruction tuned models
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
Safety
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
Refusals
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL
#### Critical risks
CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### Cyber Security
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.
### Child Safety
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.
Finally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.
Ethical Considerations and Limitations
--------------------------------------
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at URL
instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {URL
}
Contributors
------------
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
| [
"### Use with transformers\n\n\nSee the snippet below for usage with Transformers:",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository.\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #facebook #meta #pytorch #llama-3 #axolotl #en #arxiv-2309.10400 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Use with transformers\n\n\nSee the snippet below for usage with Transformers:",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository.\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] |
sentence-similarity | sentence-transformers |
# jonaschris2103/tiny_llama_embedder
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 2048 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jonaschris2103/tiny_llama_embedder')
embeddings = model.encode(sentences)
print(embeddings)
```
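
Since the model maps text into a shared dense vector space, the embeddings can be compared directly for semantic search. A minimal sketch (the corpus and query strings below are made up for illustration):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jonaschris2103/tiny_llama_embedder')

# Hypothetical corpus and query, purely for illustration
corpus = ["The cat sits on the mat", "Stock markets fell sharply today"]
query = "Where is the cat?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every corpus sentence
scores = util.cos_sim(query_embedding, corpus_embeddings)
print(scores)  # higher score = more semantically similar
```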
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jonaschris2103/tiny_llama_embedder')
model = AutoModel.from_pretrained('jonaschris2103/tiny_llama_embedder')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
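
Continuing the example, if you plan to compare these embeddings with cosine similarity, it is common (though optional, and dependent on how the model was trained) to L2-normalize them first:

```python
import torch.nn.functional as F

# Optional: L2-normalize so that dot products equal cosine similarities
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
```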
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jonaschris2103/tiny_llama_embedder)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 2048, 'do_lower_case': False}) with Transformer model: LlamaModel
(1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | jonaschris2103/tiny_llama_embedder | null | [
"sentence-transformers",
"safetensors",
"llama",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:51:25+00:00 | [] | [] | TAGS
#sentence-transformers #safetensors #llama #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# jonaschris2103/tiny_llama_embedder
This is a sentence-transformers model: It maps sentences & paragraphs to a 2048 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Full Model Architecture
## Citing & Authors
| [
"# jonaschris2103/tiny_llama_embedder\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 2048 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #safetensors #llama #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# jonaschris2103/tiny_llama_embedder\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 2048 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] |
null | transformers |
# Uploaded model
- **Developed by:** Mollel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "gguf"], "base_model": "unsloth/gemma-7b-bnb-4bit"} | Mollel/Swahili_Gemma_Q8_0 | null | [
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:52:30+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #gemma #text-generation-inference #unsloth #en #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: Mollel
- License: apache-2.0
- Finetuned from model : unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: Mollel\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #gguf #gemma #text-generation-inference #unsloth #en #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: Mollel\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers.js |
YOLOv8n-pose with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Perform pose-estimation w/ `Xenova/yolov8n-pose`.
```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';
// Load model and processor
const model_id = 'Xenova/yolov8n-pose';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);
// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const image = await RawImage.read(url);
const { pixel_values } = await processor(image);
// Set thresholds
const threshold = 0.3; // Remove detections with low confidence
const iouThreshold = 0.5; // Used to remove duplicates
const pointThreshold = 0.3; // Hide uncertain points
// Predict bounding boxes and keypoints
const { output0 } = await model({ images: pixel_values });
// Post-process:
const permuted = output0[0].transpose(1, 0);
// `permuted` is a Tensor of shape [ 8400, 56 ]:
// - 8400 potential detections
// - 56 parameters for each box:
// - 4 for the bounding box dimensions (x-center, y-center, width, height)
// - 1 for the confidence score
//   - 17 * 3 = 51 for the pose keypoints: 17 labels, each with (x, y, visibility)
// Example code to format it nicely:
const results = [];
const [scaledHeight, scaledWidth] = pixel_values.dims.slice(-2);
for (const [xc, yc, w, h, score, ...keypoints] of permuted.tolist()) {
if (score < threshold) continue;
// Get pixel values, taking into account the original image size
const x1 = (xc - w / 2) / scaledWidth * image.width;
const y1 = (yc - h / 2) / scaledHeight * image.height;
const x2 = (xc + w / 2) / scaledWidth * image.width;
const y2 = (yc + h / 2) / scaledHeight * image.height;
results.push({ x1, x2, y1, y2, score, keypoints })
}
// Define helper functions
function removeDuplicates(detections, iouThreshold) {
const filteredDetections = [];
for (const detection of detections) {
let isDuplicate = false;
let duplicateIndex = -1;
let maxIoU = 0;
for (let i = 0; i < filteredDetections.length; ++i) {
const filteredDetection = filteredDetections[i];
const iou = calculateIoU(detection, filteredDetection);
if (iou > iouThreshold) {
isDuplicate = true;
if (iou > maxIoU) {
maxIoU = iou;
duplicateIndex = i;
}
}
}
if (!isDuplicate) {
filteredDetections.push(detection);
} else if (duplicateIndex !== -1 && detection.score > filteredDetections[duplicateIndex].score) {
filteredDetections[duplicateIndex] = detection;
}
}
return filteredDetections;
}
function calculateIoU(detection1, detection2) {
const xOverlap = Math.max(0, Math.min(detection1.x2, detection2.x2) - Math.max(detection1.x1, detection2.x1));
const yOverlap = Math.max(0, Math.min(detection1.y2, detection2.y2) - Math.max(detection1.y1, detection2.y1));
const overlapArea = xOverlap * yOverlap;
const area1 = (detection1.x2 - detection1.x1) * (detection1.y2 - detection1.y1);
const area2 = (detection2.x2 - detection2.x1) * (detection2.y2 - detection2.y1);
const unionArea = area1 + area2 - overlapArea;
return overlapArea / unionArea;
}
const filteredResults = removeDuplicates(results, iouThreshold);
// Display results
for (const { x1, x2, y1, y2, score, keypoints } of filteredResults) {
console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${score.toFixed(3)}`)
for (let i = 0; i < keypoints.length; i += 3) {
const label = model.config.id2label[Math.floor(i / 3)];
const [x, y, point_score] = keypoints.slice(i, i + 3);
if (point_score < pointThreshold) continue;
console.log(` - ${label}: (${x.toFixed(2)}, ${y.toFixed(2)}) with score ${point_score.toFixed(3)}`);
}
}
```
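
Note that in the snippet above the box coordinates are mapped back to the original image, while the keypoint `(x, y)` values are still in the processor's resized input space. Continuing the example, a minimal sketch of rescaling them the same way (this assumes the simple resize used above, without letterbox padding):

```js
// Map keypoints from model-input resolution to original-image pixels
for (const result of filteredResults) {
    result.scaledKeypoints = [];
    for (let i = 0; i < result.keypoints.length; i += 3) {
        result.scaledKeypoints.push([
            result.keypoints[i] / scaledWidth * image.width,        // x
            result.keypoints[i + 1] / scaledHeight * image.height,  // y
            result.keypoints[i + 2],                                // point score (unchanged)
        ]);
    }
}
```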
<details>
<summary>See example output</summary>
```
Found person at [536.1322975158691, 37.87850737571716, 645.2879905700684, 286.9420547962189] with score 0.791
- nose: (445.81, 87.11) with score 0.936
- left_eye: (450.90, 80.87) with score 0.976
- right_eye: (439.37, 81.31) with score 0.664
- left_ear: (460.76, 81.94) with score 0.945
- left_shoulder: (478.06, 126.18) with score 0.993
- right_shoulder: (420.69, 125.17) with score 0.469
- left_elbow: (496.96, 178.36) with score 0.976
- left_wrist: (509.41, 232.75) with score 0.892
- left_hip: (469.15, 215.80) with score 0.980
- right_hip: (433.73, 218.39) with score 0.794
- left_knee: (471.45, 278.44) with score 0.969
- right_knee: (439.23, 281.77) with score 0.701
- left_ankle: (474.88, 345.49) with score 0.913
- right_ankle: (441.99, 339.82) with score 0.664
Found person at [-0.15300750732421875, 59.96129276752472, 158.73897552490234, 369.92224643230435] with score 0.863
- nose: (57.30, 95.37) with score 0.960
- left_eye: (63.85, 89.48) with score 0.889
- right_eye: (53.59, 91.60) with score 0.909
- left_ear: (73.54, 92.67) with score 0.626
- right_ear: (50.12, 95.95) with score 0.674
- left_shoulder: (87.62, 132.72) with score 0.965
- right_shoulder: (39.72, 136.82) with score 0.986
- left_elbow: (108.17, 186.58) with score 0.857
- right_elbow: (21.47, 184.66) with score 0.951
- left_wrist: (113.36, 244.21) with score 0.822
- right_wrist: (8.04, 240.50) with score 0.915
- left_hip: (83.47, 234.43) with score 0.990
- right_hip: (47.29, 237.45) with score 0.994
- left_knee: (92.12, 324.78) with score 0.985
- right_knee: (50.70, 325.75) with score 0.991
- left_ankle: (101.13, 410.45) with score 0.933
- right_ankle: (49.62, 410.14) with score 0.954
Found person at [104.13589477539062, 20.16922025680542, 505.84068298339844, 522.6950127601624] with score 0.770
- nose: (132.51, 99.38) with score 0.693
- left_eye: (138.68, 89.00) with score 0.451
- left_ear: (145.60, 85.21) with score 0.766
- left_shoulder: (188.92, 133.25) with score 0.996
- right_shoulder: (163.12, 158.90) with score 0.985
- left_elbow: (263.01, 205.18) with score 0.991
- right_elbow: (181.52, 249.12) with score 0.949
- left_wrist: (315.65, 259.88) with score 0.964
- right_wrist: (125.19, 275.10) with score 0.891
- left_hip: (279.47, 294.29) with score 0.998
- right_hip: (266.84, 309.38) with score 0.997
- left_knee: (261.67, 416.57) with score 0.989
- right_knee: (256.66, 428.75) with score 0.982
- left_ankle: (322.92, 454.74) with score 0.805
- right_ankle: (339.15, 459.64) with score 0.780
Found person at [423.3617973327637, 72.75799512863159, 638.2988166809082, 513.1156357765198] with score 0.903
- nose: (417.19, 137.27) with score 0.992
- left_eye: (429.74, 127.59) with score 0.975
- right_eye: (409.83, 129.06) with score 0.961
- left_ear: (445.81, 133.82) with score 0.847
- right_ear: (399.09, 132.99) with score 0.711
- left_shoulder: (451.43, 195.71) with score 0.997
- right_shoulder: (372.58, 196.25) with score 0.995
- left_elbow: (463.89, 286.56) with score 0.991
- right_elbow: (351.35, 260.40) with score 0.978
- left_wrist: (488.70, 367.36) with score 0.986
- right_wrist: (395.69, 272.20) with score 0.973
- left_hip: (435.84, 345.96) with score 0.999
- right_hip: (380.21, 355.38) with score 0.999
- left_knee: (454.88, 456.63) with score 0.994
- right_knee: (395.82, 478.67) with score 0.992
- left_ankle: (453.75, 556.37) with score 0.889
- right_ankle: (402.35, 582.09) with score 0.872
```
</details> | {"license": "agpl-3.0", "library_name": "transformers.js", "tags": ["pose-estimation"]} | Xenova/yolov8n-pose | null | [
"transformers.js",
"onnx",
"yolov8",
"pose-estimation",
"license:agpl-3.0",
"region:us"
] | null | 2024-04-24T17:52:47+00:00 | [] | [] | TAGS
#transformers.js #onnx #yolov8 #pose-estimation #license-agpl-3.0 #region-us
|
YOLOv8n-pose with ONNX weights to be compatible with URL.
## Usage (URL)
If you haven't already, you can install the URL JavaScript library from NPM using:
Example: Perform pose-estimation w/ 'Xenova/yolov8n-pose'.
<details>
<summary>See example output</summary>
</details> | [
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/yolov8n-pose'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n</details>"
] | [
"TAGS\n#transformers.js #onnx #yolov8 #pose-estimation #license-agpl-3.0 #region-us \n",
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/yolov8n-pose'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n</details>"
] |
null | transformers.js |
YOLOv8s-pose with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Perform pose-estimation w/ `Xenova/yolov8s-pose`.
```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';
// Load model and processor
const model_id = 'Xenova/yolov8s-pose';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);
// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const image = await RawImage.read(url);
const { pixel_values } = await processor(image);
// Set thresholds
const threshold = 0.3; // Remove detections with low confidence
const iouThreshold = 0.5; // Used to remove duplicates
const pointThreshold = 0.3; // Hide uncertain points
// Predict bounding boxes and keypoints
const { output0 } = await model({ images: pixel_values });
// Post-process:
const permuted = output0[0].transpose(1, 0);
// `permuted` is a Tensor of shape [ 8400, 56 ]:
// - 8400 potential detections
// - 56 parameters for each box:
// - 4 for the bounding box dimensions (x-center, y-center, width, height)
// - 1 for the confidence score
//   - 17 * 3 = 51 for the pose keypoints: 17 labels, each with (x, y, visibility)
// Example code to format it nicely:
const results = [];
const [scaledHeight, scaledWidth] = pixel_values.dims.slice(-2);
for (const [xc, yc, w, h, score, ...keypoints] of permuted.tolist()) {
if (score < threshold) continue;
// Get pixel values, taking into account the original image size
const x1 = (xc - w / 2) / scaledWidth * image.width;
const y1 = (yc - h / 2) / scaledHeight * image.height;
const x2 = (xc + w / 2) / scaledWidth * image.width;
const y2 = (yc + h / 2) / scaledHeight * image.height;
results.push({ x1, x2, y1, y2, score, keypoints })
}
// Define helper functions
function removeDuplicates(detections, iouThreshold) {
const filteredDetections = [];
for (const detection of detections) {
let isDuplicate = false;
let duplicateIndex = -1;
let maxIoU = 0;
for (let i = 0; i < filteredDetections.length; ++i) {
const filteredDetection = filteredDetections[i];
const iou = calculateIoU(detection, filteredDetection);
if (iou > iouThreshold) {
isDuplicate = true;
if (iou > maxIoU) {
maxIoU = iou;
duplicateIndex = i;
}
}
}
if (!isDuplicate) {
filteredDetections.push(detection);
} else if (duplicateIndex !== -1 && detection.score > filteredDetections[duplicateIndex].score) {
filteredDetections[duplicateIndex] = detection;
}
}
return filteredDetections;
}
function calculateIoU(detection1, detection2) {
const xOverlap = Math.max(0, Math.min(detection1.x2, detection2.x2) - Math.max(detection1.x1, detection2.x1));
const yOverlap = Math.max(0, Math.min(detection1.y2, detection2.y2) - Math.max(detection1.y1, detection2.y1));
const overlapArea = xOverlap * yOverlap;
const area1 = (detection1.x2 - detection1.x1) * (detection1.y2 - detection1.y1);
const area2 = (detection2.x2 - detection2.x1) * (detection2.y2 - detection2.y1);
const unionArea = area1 + area2 - overlapArea;
return overlapArea / unionArea;
}
const filteredResults = removeDuplicates(results, iouThreshold);
// Display results
for (const { x1, x2, y1, y2, score, keypoints } of filteredResults) {
console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${score.toFixed(3)}`)
for (let i = 0; i < keypoints.length; i += 3) {
const label = model.config.id2label[Math.floor(i / 3)];
const [x, y, point_score] = keypoints.slice(i, i + 3);
if (point_score < pointThreshold) continue;
console.log(` - ${label}: (${x.toFixed(2)}, ${y.toFixed(2)}) with score ${point_score.toFixed(3)}`);
}
}
```
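
One design note on `removeDuplicates` above: it scans detections in their original order and swaps in the higher-scoring box whenever it finds an overlap. Sorting by confidence first turns this into the more conventional greedy NMS, where the best box in each overlapping cluster is always the one kept. Continuing the example:

```js
// Alternative: sort by confidence before filtering (conventional greedy NMS)
results.sort((a, b) => b.score - a.score);
const nmsResults = removeDuplicates(results, iouThreshold);
```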
<details>
<summary>See example output</summary>
```
Found person at [533.1403350830078, 39.96531672477722, 645.8853149414062, 296.1657429695129] with score 0.739
- nose: (443.99, 91.98) with score 0.970
- left_eye: (449.84, 85.01) with score 0.968
- right_eye: (436.28, 86.54) with score 0.839
- left_ear: (458.69, 87.08) with score 0.822
- right_ear: (427.88, 89.20) with score 0.317
- left_shoulder: (471.29, 128.05) with score 0.991
- right_shoulder: (421.84, 127.22) with score 0.788
- left_elbow: (494.03, 174.09) with score 0.976
- right_elbow: (405.83, 162.81) with score 0.367
- left_wrist: (505.29, 232.06) with score 0.955
- right_wrist: (411.89, 213.05) with score 0.470
- left_hip: (469.48, 217.49) with score 0.978
- right_hip: (438.79, 216.48) with score 0.901
- left_knee: (474.03, 283.00) with score 0.957
- right_knee: (448.00, 287.90) with score 0.808
- left_ankle: (472.06, 339.67) with score 0.815
- right_ankle: (447.15, 340.44) with score 0.576
Found person at [0.03232002258300781, 57.89646775722503, 156.35095596313477, 370.9132190942764] with score 0.908
- nose: (60.48, 105.82) with score 0.975
- left_eye: (64.86, 100.59) with score 0.952
- right_eye: (55.12, 100.60) with score 0.855
- left_ear: (73.04, 101.96) with score 0.820
- right_ear: (51.07, 103.28) with score 0.482
- left_shoulder: (85.74, 137.77) with score 0.996
- right_shoulder: (42.04, 137.63) with score 0.988
- left_elbow: (101.10, 190.45) with score 0.988
- right_elbow: (25.75, 186.44) with score 0.937
- left_wrist: (115.93, 250.05) with score 0.975
- right_wrist: (7.39, 233.44) with score 0.918
- left_hip: (80.15, 242.20) with score 0.999
- right_hip: (52.69, 239.82) with score 0.999
- left_knee: (93.29, 326.00) with score 0.999
- right_knee: (57.42, 329.04) with score 0.998
- left_ankle: (100.24, 413.83) with score 0.992
- right_ankle: (50.47, 417.93) with score 0.988
Found person at [106.16920471191406, 8.419264698028565, 515.0135803222656, 530.6886708259583] with score 0.819
- nose: (134.03, 111.15) with score 0.921
- left_eye: (137.51, 100.95) with score 0.824
- right_eye: (131.82, 97.53) with score 0.489
- left_ear: (147.19, 92.96) with score 0.792
- left_shoulder: (188.28, 127.51) with score 0.993
- right_shoulder: (181.81, 149.32) with score 0.995
- left_elbow: (258.49, 199.10) with score 0.984
- right_elbow: (181.43, 251.27) with score 0.988
- left_wrist: (311.74, 257.93) with score 0.979
- right_wrist: (129.68, 284.38) with score 0.984
- left_hip: (267.43, 299.85) with score 1.000
- right_hip: (277.05, 307.50) with score 1.000
- left_knee: (232.15, 427.54) with score 0.999
- right_knee: (278.99, 453.09) with score 0.999
- left_ankle: (352.68, 457.89) with score 0.990
- right_ankle: (362.15, 554.69) with score 0.993
Found person at [425.3855133056641, 73.76281919479369, 640.6651306152344, 502.32841634750366] with score 0.876
- nose: (416.15, 149.68) with score 0.996
- left_eye: (430.34, 139.56) with score 0.984
- right_eye: (412.88, 142.56) with score 0.976
- left_ear: (446.59, 142.21) with score 0.843
- right_ear: (398.82, 144.52) with score 0.740
- left_shoulder: (436.54, 197.92) with score 0.999
- right_shoulder: (362.94, 210.20) with score 0.996
- left_elbow: (460.06, 293.80) with score 0.992
- right_elbow: (352.33, 262.09) with score 0.966
- left_wrist: (491.33, 364.20) with score 0.986
- right_wrist: (402.62, 272.23) with score 0.956
- left_hip: (429.79, 354.94) with score 0.999
- right_hip: (383.27, 372.77) with score 0.999
- left_knee: (461.07, 437.73) with score 0.998
- right_knee: (410.89, 522.05) with score 0.995
- left_ankle: (460.74, 552.53) with score 0.966
- right_ankle: (429.00, 560.54) with score 0.940
```
</details> | {"license": "agpl-3.0", "library_name": "transformers.js", "tags": ["pose-estimation"]} | Xenova/yolov8s-pose | null | [
"transformers.js",
"onnx",
"yolov8",
"pose-estimation",
"license:agpl-3.0",
"region:us"
] | null | 2024-04-24T17:52:50+00:00 | [] | [] | TAGS
#transformers.js #onnx #yolov8 #pose-estimation #license-agpl-3.0 #region-us
|
YOLOv8s-pose with ONNX weights to be compatible with URL.
## Usage (URL)
If you haven't already, you can install the URL JavaScript library from NPM using:
Example: Perform pose-estimation w/ 'Xenova/yolov8s-pose'.
<details>
<summary>See example output</summary>
</details> | [
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/yolov8s-pose'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n</details>"
] | [
"TAGS\n#transformers.js #onnx #yolov8 #pose-estimation #license-agpl-3.0 #region-us \n",
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/yolov8s-pose'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n</details>"
] |
null | transformers.js |
YOLOv8m-pose with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Perform pose-estimation w/ `Xenova/yolov8m-pose`.
```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';
// Load model and processor
const model_id = 'Xenova/yolov8m-pose';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);
// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const image = await RawImage.read(url);
const { pixel_values } = await processor(image);
// Set thresholds
const threshold = 0.3; // Remove detections with low confidence
const iouThreshold = 0.5; // Used to remove duplicates
const pointThreshold = 0.3; // Hide uncertain points
// Predict bounding boxes and keypoints
const { output0 } = await model({ images: pixel_values });
// Post-process:
const permuted = output0[0].transpose(1, 0);
// `permuted` is a Tensor of shape [ 8400, 56 ]:
// - 8400 potential detections
// - 56 parameters for each box:
// - 4 for the bounding box dimensions (x-center, y-center, width, height)
// - 1 for the confidence score
//   - 17 * 3 = 51 for the pose keypoints: 17 labels, each with (x, y, visibility)
// Example code to format it nicely:
const results = [];
const [scaledHeight, scaledWidth] = pixel_values.dims.slice(-2);
for (const [xc, yc, w, h, score, ...keypoints] of permuted.tolist()) {
if (score < threshold) continue;
// Get pixel values, taking into account the original image size
const x1 = (xc - w / 2) / scaledWidth * image.width;
const y1 = (yc - h / 2) / scaledHeight * image.height;
const x2 = (xc + w / 2) / scaledWidth * image.width;
const y2 = (yc + h / 2) / scaledHeight * image.height;
results.push({ x1, x2, y1, y2, score, keypoints })
}
// Define helper functions
function removeDuplicates(detections, iouThreshold) {
const filteredDetections = [];
for (const detection of detections) {
let isDuplicate = false;
let duplicateIndex = -1;
let maxIoU = 0;
for (let i = 0; i < filteredDetections.length; ++i) {
const filteredDetection = filteredDetections[i];
const iou = calculateIoU(detection, filteredDetection);
if (iou > iouThreshold) {
isDuplicate = true;
if (iou > maxIoU) {
maxIoU = iou;
duplicateIndex = i;
}
}
}
if (!isDuplicate) {
filteredDetections.push(detection);
} else if (duplicateIndex !== -1 && detection.score > filteredDetections[duplicateIndex].score) {
filteredDetections[duplicateIndex] = detection;
}
}
return filteredDetections;
}
function calculateIoU(detection1, detection2) {
const xOverlap = Math.max(0, Math.min(detection1.x2, detection2.x2) - Math.max(detection1.x1, detection2.x1));
const yOverlap = Math.max(0, Math.min(detection1.y2, detection2.y2) - Math.max(detection1.y1, detection2.y1));
const overlapArea = xOverlap * yOverlap;
const area1 = (detection1.x2 - detection1.x1) * (detection1.y2 - detection1.y1);
const area2 = (detection2.x2 - detection2.x1) * (detection2.y2 - detection2.y1);
const unionArea = area1 + area2 - overlapArea;
return overlapArea / unionArea;
}
const filteredResults = removeDuplicates(results, iouThreshold);
// Display results
for (const { x1, x2, y1, y2, score, keypoints } of filteredResults) {
console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${score.toFixed(3)}`)
for (let i = 0; i < keypoints.length; i += 3) {
const label = model.config.id2label[Math.floor(i / 3)];
const [x, y, point_score] = keypoints.slice(i, i + 3);
if (point_score < pointThreshold) continue;
console.log(` - ${label}: (${x.toFixed(2)}, ${y.toFixed(2)}) with score ${point_score.toFixed(3)}`);
}
}
```
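
In a browser, the detections can be drawn on top of the image. A minimal sketch continuing the example above; it assumes a `<canvas id="output">` element sized to the original image, with the image itself already drawn onto it:

```js
// Browser-only sketch: draw boxes and keypoints onto an existing canvas
const ctx = document.getElementById('output').getContext('2d');
ctx.strokeStyle = 'lime';
ctx.fillStyle = 'red';
for (const { x1, y1, x2, y2, keypoints } of filteredResults) {
    // Bounding box (already in original-image coordinates)
    ctx.strokeRect(x1, y1, x2 - x1, y2 - y1);
    // Keypoints come out in the resized input space, so rescale them here
    for (let i = 0; i < keypoints.length; i += 3) {
        if (keypoints[i + 2] < pointThreshold) continue; // skip uncertain points
        const x = keypoints[i] / scaledWidth * image.width;
        const y = keypoints[i + 1] / scaledHeight * image.height;
        ctx.beginPath();
        ctx.arc(x, y, 3, 0, 2 * Math.PI);
        ctx.fill();
    }
}
```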
<details>
<summary>See example output</summary>
```
Found person at [535.503101348877, 39.878777217864986, 644.8351860046387, 346.3689248085022] with score 0.655
- nose: (444.86, 91.25) with score 0.912
- left_eye: (449.55, 79.71) with score 0.912
- right_eye: (436.53, 82.54) with score 0.689
- left_ear: (457.66, 83.08) with score 0.774
- left_shoulder: (476.25, 126.43) with score 0.984
- right_shoulder: (419.05, 129.94) with score 0.675
- left_elbow: (495.99, 180.55) with score 0.960
- left_wrist: (504.15, 233.96) with score 0.888
- left_hip: (469.08, 227.61) with score 0.961
- right_hip: (428.82, 228.95) with score 0.821
- left_knee: (474.97, 301.15) with score 0.919
- right_knee: (434.24, 305.24) with score 0.704
- left_ankle: (467.31, 384.83) with score 0.625
- right_ankle: (439.09, 379.35) with score 0.378
Found person at [-0.08985519409179688, 56.876064038276674, 158.62728118896484, 371.25909755229947] with score 0.902
- nose: (61.15, 102.21) with score 0.979
- left_eye: (66.59, 91.92) with score 0.939
- right_eye: (51.35, 95.02) with score 0.905
- left_ear: (70.82, 97.11) with score 0.778
- right_ear: (48.08, 97.46) with score 0.655
- left_shoulder: (84.60, 139.95) with score 0.997
- right_shoulder: (38.36, 139.32) with score 0.996
- left_elbow: (98.25, 196.80) with score 0.990
- right_elbow: (24.83, 188.15) with score 0.981
- left_wrist: (103.38, 252.91) with score 0.977
- right_wrist: (9.42, 233.04) with score 0.965
- left_hip: (82.91, 247.50) with score 0.999
- right_hip: (51.28, 248.31) with score 0.999
- left_knee: (85.25, 326.65) with score 0.997
- right_knee: (49.12, 330.50) with score 0.996
- left_ankle: (96.84, 419.45) with score 0.964
- right_ankle: (51.88, 416.89) with score 0.960
Found person at [109.41852569580077, 13.203005981445314, 505.06954193115234, 532.9905454635621] with score 0.911
- nose: (126.16, 102.84) with score 0.586
- left_eye: (125.44, 84.07) with score 0.352
- left_ear: (137.38, 77.79) with score 0.722
- left_shoulder: (181.75, 122.32) with score 0.997
- right_shoulder: (180.20, 152.15) with score 0.998
- left_elbow: (262.31, 202.36) with score 0.996
- right_elbow: (194.94, 277.60) with score 0.997
- left_wrist: (298.87, 269.32) with score 0.987
- right_wrist: (132.86, 281.44) with score 0.990
- left_hip: (272.70, 284.47) with score 1.000
- right_hip: (274.35, 307.48) with score 1.000
- left_knee: (247.66, 441.74) with score 0.997
- right_knee: (256.27, 500.82) with score 0.998
- left_ankle: (340.54, 455.33) with score 0.848
- right_ankle: (338.54, 543.24) with score 0.882
Found person at [425.35156250000006, 68.73829221725464, 640.3047943115234, 494.19192361831665] with score 0.901
- nose: (425.40, 147.53) with score 0.995
- left_eye: (432.33, 133.12) with score 0.985
- right_eye: (410.70, 135.98) with score 0.969
- left_ear: (440.72, 134.14) with score 0.901
- right_ear: (400.69, 134.89) with score 0.800
- left_shoulder: (455.11, 201.19) with score 1.000
- right_shoulder: (368.64, 201.60) with score 0.999
- left_elbow: (455.25, 292.03) with score 0.998
- right_elbow: (350.65, 258.24) with score 0.989
- left_wrist: (475.06, 370.36) with score 0.992
- right_wrist: (398.78, 263.84) with score 0.975
- left_hip: (441.94, 359.78) with score 1.000
- right_hip: (384.06, 368.70) with score 1.000
- left_knee: (462.74, 452.41) with score 0.998
- right_knee: (395.50, 488.42) with score 0.997
- left_ankle: (465.12, 540.38) with score 0.960
- right_ankle: (433.43, 569.37) with score 0.938
```
</details> | {"license": "agpl-3.0", "library_name": "transformers.js", "tags": ["pose-estimation"]} | Xenova/yolov8m-pose | null | [
"transformers.js",
"onnx",
"yolov8",
"pose-estimation",
"license:agpl-3.0",
"region:us"
] | null | 2024-04-24T17:52:54+00:00 | [] | [] | TAGS
#transformers.js #onnx #yolov8 #pose-estimation #license-agpl-3.0 #region-us
|
YOLOv8m-pose with ONNX weights to be compatible with URL.
## Usage (URL)
If you haven't already, you can install the URL JavaScript library from NPM using:
Example: Perform pose-estimation w/ 'Xenova/yolov8m-pose'.
<details>
<summary>See example output</summary>
</details> | [
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/yolov8m-pose'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n</details>"
] | [
"TAGS\n#transformers.js #onnx #yolov8 #pose-estimation #license-agpl-3.0 #region-us \n",
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/yolov8m-pose'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n</details>"
] |
null | transformers.js |
YOLOv8l-pose with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Perform pose-estimation w/ `Xenova/yolov8l-pose`.
```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';
// Load model and processor
const model_id = 'Xenova/yolov8l-pose';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);
// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const image = await RawImage.read(url);
const { pixel_values } = await processor(image);
// Set thresholds
const threshold = 0.3; // Remove detections with low confidence
const iouThreshold = 0.5; // Used to remove duplicates
const pointThreshold = 0.3; // Hide uncertain points
// Predict bounding boxes and keypoints
const { output0 } = await model({ images: pixel_values });
// Post-process:
const permuted = output0[0].transpose(1, 0);
// `permuted` is a Tensor of shape [ 8400, 56 ]:
// - 8400 potential detections
// - 56 parameters for each box:
// - 4 for the bounding box dimensions (x-center, y-center, width, height)
// - 1 for the confidence score
//   - 17 * 3 = 51 for the pose keypoints: 17 labels, each with (x, y, visibility)
// Example code to format it nicely:
const results = [];
const [scaledHeight, scaledWidth] = pixel_values.dims.slice(-2);
for (const [xc, yc, w, h, score, ...keypoints] of permuted.tolist()) {
if (score < threshold) continue;
// Get pixel values, taking into account the original image size
const x1 = (xc - w / 2) / scaledWidth * image.width;
const y1 = (yc - h / 2) / scaledHeight * image.height;
const x2 = (xc + w / 2) / scaledWidth * image.width;
const y2 = (yc + h / 2) / scaledHeight * image.height;
results.push({ x1, x2, y1, y2, score, keypoints })
}
// Define helper functions
function removeDuplicates(detections, iouThreshold) {
const filteredDetections = [];
for (const detection of detections) {
let isDuplicate = false;
let duplicateIndex = -1;
let maxIoU = 0;
for (let i = 0; i < filteredDetections.length; ++i) {
const filteredDetection = filteredDetections[i];
const iou = calculateIoU(detection, filteredDetection);
if (iou > iouThreshold) {
isDuplicate = true;
if (iou > maxIoU) {
maxIoU = iou;
duplicateIndex = i;
}
}
}
if (!isDuplicate) {
filteredDetections.push(detection);
} else if (duplicateIndex !== -1 && detection.score > filteredDetections[duplicateIndex].score) {
filteredDetections[duplicateIndex] = detection;
}
}
return filteredDetections;
}
function calculateIoU(detection1, detection2) {
const xOverlap = Math.max(0, Math.min(detection1.x2, detection2.x2) - Math.max(detection1.x1, detection2.x1));
const yOverlap = Math.max(0, Math.min(detection1.y2, detection2.y2) - Math.max(detection1.y1, detection2.y1));
const overlapArea = xOverlap * yOverlap;
const area1 = (detection1.x2 - detection1.x1) * (detection1.y2 - detection1.y1);
const area2 = (detection2.x2 - detection2.x1) * (detection2.y2 - detection2.y1);
const unionArea = area1 + area2 - overlapArea;
return overlapArea / unionArea;
}
const filteredResults = removeDuplicates(results, iouThreshold);
// Display results
for (const { x1, x2, y1, y2, score, keypoints } of filteredResults) {
console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${score.toFixed(3)}`)
for (let i = 0; i < keypoints.length; i += 3) {
const label = model.config.id2label[Math.floor(i / 3)];
const [x, y, point_score] = keypoints.slice(i, i + 3);
if (point_score < pointThreshold) continue;
console.log(` - ${label}: (${x.toFixed(2)}, ${y.toFixed(2)}) with score ${point_score.toFixed(3)}`);
}
}
```
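
If the predictions need to be handed to downstream tooling, the COCO keypoint annotation layout is a common target. A hypothetical helper, continuing the example above; note that COCO's visibility flag is a 0/1/2 label, so deriving it from the point score is an approximation:

```js
// Hypothetical helper: convert one detection to a COCO-style annotation
function toCocoAnnotation({ x1, y1, x2, y2, keypoints }, pointThreshold = 0.3) {
    const cocoKeypoints = [];
    let numKeypoints = 0;
    for (let i = 0; i < keypoints.length; i += 3) {
        const visible = keypoints[i + 2] >= pointThreshold;
        if (visible) ++numKeypoints;
        cocoKeypoints.push(keypoints[i], keypoints[i + 1], visible ? 2 : 0);
    }
    return {
        bbox: [x1, y1, x2 - x1, y2 - y1], // COCO uses [x, y, width, height]
        keypoints: cocoKeypoints,
        num_keypoints: numKeypoints,
    };
}
```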
<details>
<summary>See example output</summary>
```
Found person at [539.2378807067871, 41.92433733940124, 642.9805946350098, 334.98332471847533] with score 0.727
- nose: (445.67, 84.43) with score 0.976
- left_eye: (451.88, 76.89) with score 0.983
- right_eye: (440.39, 76.33) with score 0.888
- left_ear: (463.89, 81.68) with score 0.837
- left_shoulder: (478.95, 123.91) with score 0.993
- right_shoulder: (419.52, 123.44) with score 0.694
- left_elbow: (501.07, 180.46) with score 0.979
- left_wrist: (504.60, 238.34) with score 0.950
- left_hip: (469.53, 220.77) with score 0.985
- right_hip: (431.21, 222.54) with score 0.875
- left_knee: (473.45, 302.16) with score 0.972
- right_knee: (432.61, 302.91) with score 0.759
- left_ankle: (467.74, 380.37) with score 0.874
- right_ankle: (438.06, 381.94) with score 0.516
Found person at [0.59722900390625, 59.435689163208, 157.59026527404785, 370.3985949516296] with score 0.927
- nose: (56.99, 100.53) with score 0.959
- left_eye: (63.46, 94.19) with score 0.930
- right_eye: (51.11, 96.48) with score 0.846
- left_ear: (73.43, 97.84) with score 0.798
- right_ear: (46.36, 99.41) with score 0.484
- left_shoulder: (84.93, 134.17) with score 0.988
- right_shoulder: (41.60, 133.96) with score 0.976
- left_elbow: (96.33, 189.89) with score 0.959
- right_elbow: (24.60, 192.73) with score 0.879
- left_wrist: (104.79, 258.62) with score 0.928
- right_wrist: (7.89, 238.55) with score 0.830
- left_hip: (83.23, 234.45) with score 0.993
- right_hip: (53.89, 235.50) with score 0.991
- left_knee: (87.80, 326.73) with score 0.988
- right_knee: (49.44, 327.89) with score 0.982
- left_ankle: (100.93, 416.88) with score 0.925
- right_ankle: (44.52, 421.24) with score 0.912
Found person at [112.88127899169922, 13.998864459991454, 504.09095764160156, 533.4011061668397] with score 0.943
- nose: (122.64, 98.36) with score 0.366
- left_ear: (132.43, 77.58) with score 0.794
- left_shoulder: (196.67, 124.78) with score 0.999
- right_shoulder: (176.97, 142.00) with score 0.998
- left_elbow: (256.79, 196.00) with score 0.998
- right_elbow: (182.85, 279.47) with score 0.994
- left_wrist: (305.44, 270.10) with score 0.982
- right_wrist: (129.72, 281.09) with score 0.963
- left_hip: (275.59, 290.38) with score 1.000
- right_hip: (263.91, 310.60) with score 1.000
- left_knee: (237.89, 445.88) with score 0.998
- right_knee: (249.66, 477.34) with score 0.998
- left_ankle: (349.25, 438.70) with score 0.940
- right_ankle: (338.20, 586.62) with score 0.935
Found person at [424.730339050293, 67.2046113729477, 639.5703506469727, 493.03533136844635] with score 0.944
- nose: (416.55, 141.74) with score 0.991
- left_eye: (428.51, 130.99) with score 0.962
- right_eye: (408.83, 130.86) with score 0.938
- left_ear: (441.95, 133.48) with score 0.832
- right_ear: (399.56, 133.27) with score 0.652
- left_shoulder: (440.79, 193.75) with score 0.999
- right_shoulder: (372.38, 208.42) with score 0.998
- left_elbow: (453.56, 290.07) with score 0.995
- right_elbow: (350.56, 262.83) with score 0.992
- left_wrist: (482.36, 363.64) with score 0.995
- right_wrist: (398.84, 267.30) with score 0.993
- left_hip: (435.96, 362.27) with score 0.999
- right_hip: (388.40, 383.41) with score 0.999
- left_knee: (460.50, 425.60) with score 0.994
- right_knee: (403.19, 516.76) with score 0.992
- left_ankle: (459.31, 558.19) with score 0.893
- right_ankle: (426.29, 552.55) with score 0.868
```
</details> | {"license": "agpl-3.0", "library_name": "transformers.js", "tags": ["pose-estimation"]} | Xenova/yolov8l-pose | null | [
"transformers.js",
"onnx",
"yolov8",
"pose-estimation",
"license:agpl-3.0",
"region:us"
] | null | 2024-04-24T17:52:59+00:00 | [] | [] | TAGS
#transformers.js #onnx #yolov8 #pose-estimation #license-agpl-3.0 #region-us
|
YOLOv8l-pose with ONNX weights to be compatible with URL.
## Usage (URL)
If you haven't already, you can install the URL JavaScript library from NPM using:
Example: Perform pose-estimation w/ 'Xenova/yolov8l-pose'.
<details>
<summary>See example output</summary>
</details> | [
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/yolov8l-pose'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n</details>"
] | [
"TAGS\n#transformers.js #onnx #yolov8 #pose-estimation #license-agpl-3.0 #region-us \n",
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/yolov8l-pose'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n</details>"
] |
null | transformers.js |
YOLOv8x-pose with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Perform pose-estimation w/ `Xenova/yolov8x-pose`.
```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';
// Load model and processor
const model_id = 'Xenova/yolov8x-pose';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);
// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const image = await RawImage.read(url);
const { pixel_values } = await processor(image);
// Set thresholds
const threshold = 0.3; // Remove detections with low confidence
const iouThreshold = 0.5; // Used to remove duplicates
const pointThreshold = 0.3; // Hide uncertain points
// Predict bounding boxes and keypoints
const { output0 } = await model({ images: pixel_values });
// Post-process:
const permuted = output0[0].transpose(1, 0);
// `permuted` is a Tensor of shape [ 8400, 56 ]:
// - 8400 potential detections
// - 56 parameters for each box:
// - 4 for the bounding box dimensions (x-center, y-center, width, height)
// - 1 for the confidence score
//     - 17 * 3 = 51 for the pose keypoints: 17 labels, each with (x, y, visibility)
// Example code to format it nicely:
const results = [];
const [scaledHeight, scaledWidth] = pixel_values.dims.slice(-2);
for (const [xc, yc, w, h, score, ...keypoints] of permuted.tolist()) {
if (score < threshold) continue;
// Get pixel values, taking into account the original image size
const x1 = (xc - w / 2) / scaledWidth * image.width;
const y1 = (yc - h / 2) / scaledHeight * image.height;
const x2 = (xc + w / 2) / scaledWidth * image.width;
const y2 = (yc + h / 2) / scaledHeight * image.height;
results.push({ x1, x2, y1, y2, score, keypoints })
}
// Define helper functions
function removeDuplicates(detections, iouThreshold) {
const filteredDetections = [];
for (const detection of detections) {
let isDuplicate = false;
let duplicateIndex = -1;
let maxIoU = 0;
for (let i = 0; i < filteredDetections.length; ++i) {
const filteredDetection = filteredDetections[i];
const iou = calculateIoU(detection, filteredDetection);
if (iou > iouThreshold) {
isDuplicate = true;
if (iou > maxIoU) {
maxIoU = iou;
duplicateIndex = i;
}
}
}
if (!isDuplicate) {
filteredDetections.push(detection);
} else if (duplicateIndex !== -1 && detection.score > filteredDetections[duplicateIndex].score) {
filteredDetections[duplicateIndex] = detection;
}
}
return filteredDetections;
}
function calculateIoU(detection1, detection2) {
const xOverlap = Math.max(0, Math.min(detection1.x2, detection2.x2) - Math.max(detection1.x1, detection2.x1));
const yOverlap = Math.max(0, Math.min(detection1.y2, detection2.y2) - Math.max(detection1.y1, detection2.y1));
const overlapArea = xOverlap * yOverlap;
const area1 = (detection1.x2 - detection1.x1) * (detection1.y2 - detection1.y1);
const area2 = (detection2.x2 - detection2.x1) * (detection2.y2 - detection2.y1);
const unionArea = area1 + area2 - overlapArea;
return overlapArea / unionArea;
}
const filteredResults = removeDuplicates(results, iouThreshold);
// Display results
for (const { x1, x2, y1, y2, score, keypoints } of filteredResults) {
console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${score.toFixed(3)}`)
for (let i = 0; i < keypoints.length; i += 3) {
const label = model.config.id2label[Math.floor(i / 3)];
const [x, y, point_score] = keypoints.slice(i, i + 3);
if (point_score < pointThreshold) continue;
console.log(` - ${label}: (${x.toFixed(2)}, ${y.toFixed(2)}) with score ${point_score.toFixed(3)}`);
}
}
```
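
Note that in the example above the box corners are rescaled to the original image size, while the keypoint coordinates are left in the model's input resolution (you can see this in the sample output below, where keypoints fall outside their reported boxes). A minimal sketch of the extra rescaling step, assuming the same `filteredResults`, `scaledWidth`, `scaledHeight`, and `image` variables as above:

```js
// Convert each keypoint triplet (x, y, score) into original-image pixels.
for (const detection of filteredResults) {
    const { keypoints } = detection;
    for (let i = 0; i < keypoints.length; i += 3) {
        keypoints[i] = keypoints[i] / scaledWidth * image.width;           // x
        keypoints[i + 1] = keypoints[i + 1] / scaledHeight * image.height; // y
        // keypoints[i + 2] is the visibility score and needs no rescaling
    }
}
```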
<details>
<summary>See example output</summary>
```
Found person at [535.7708740234375, 45.77457022666931, 644.4645690917969, 312.20427117347714] with score 0.697
- nose: (441.61, 87.47) with score 0.966
- left_eye: (449.36, 79.91) with score 0.988
- right_eye: (436.36, 79.56) with score 0.850
- left_ear: (462.02, 83.57) with score 0.919
- left_shoulder: (478.73, 127.16) with score 0.994
- right_shoulder: (420.37, 126.47) with score 0.703
- left_elbow: (503.33, 180.38) with score 0.977
- left_wrist: (506.53, 236.52) with score 0.924
- left_hip: (470.67, 223.60) with score 0.982
- right_hip: (432.32, 223.90) with score 0.851
- left_knee: (470.86, 306.20) with score 0.949
- right_knee: (428.56, 306.69) with score 0.601
- left_ankle: (463.92, 383.59) with score 0.737
Found person at [-0.06377220153808594, 61.59769003391266, 156.24676704406738, 370.5519897222519] with score 0.926
- nose: (59.61, 100.49) with score 0.979
- left_eye: (66.44, 96.11) with score 0.954
- right_eye: (55.82, 96.21) with score 0.908
- left_ear: (76.90, 98.52) with score 0.819
- right_ear: (49.82, 102.11) with score 0.571
- left_shoulder: (87.07, 135.82) with score 0.990
- right_shoulder: (36.53, 134.96) with score 0.987
- left_elbow: (102.21, 193.66) with score 0.970
- right_elbow: (24.85, 187.30) with score 0.947
- left_wrist: (110.61, 245.75) with score 0.962
- right_wrist: (6.28, 233.46) with score 0.939
- left_hip: (82.71, 230.04) with score 0.997
- right_hip: (48.15, 235.65) with score 0.995
- left_knee: (95.27, 321.57) with score 0.993
- right_knee: (52.73, 320.56) with score 0.991
- left_ankle: (100.90, 415.89) with score 0.948
- right_ankle: (56.65, 417.09) with score 0.942
Found person at [109.67742919921875, 12.466975402832032, 501.75636291503906, 533.3693368911744] with score 0.934
- nose: (126.43, 96.98) with score 0.715
- left_eye: (126.52, 88.36) with score 0.664
- left_ear: (136.92, 78.79) with score 0.934
- left_shoulder: (191.69, 125.31) with score 0.998
- right_shoulder: (166.08, 138.95) with score 0.993
- left_elbow: (254.38, 194.23) with score 0.997
- right_elbow: (186.09, 258.25) with score 0.986
- left_wrist: (309.75, 260.93) with score 0.990
- right_wrist: (133.20, 283.14) with score 0.973
- left_hip: (281.07, 280.72) with score 1.000
- right_hip: (258.20, 300.47) with score 1.000
- left_knee: (228.48, 442.67) with score 0.999
- right_knee: (250.90, 474.40) with score 0.999
- left_ankle: (343.96, 435.26) with score 0.979
- right_ankle: (340.41, 601.64) with score 0.971
Found person at [422.38683700561523, 67.97338972091676, 638.0375099182129, 493.7016093254089] with score 0.932
- nose: (417.60, 144.74) with score 0.989
- left_eye: (426.67, 134.88) with score 0.959
- right_eye: (410.81, 135.93) with score 0.952
- left_ear: (443.39, 137.08) with score 0.771
- right_ear: (400.11, 142.05) with score 0.753
- left_shoulder: (446.92, 202.43) with score 0.997
- right_shoulder: (374.31, 196.36) with score 0.993
- left_elbow: (458.77, 287.40) with score 0.990
- right_elbow: (355.46, 260.60) with score 0.971
- left_wrist: (488.87, 354.68) with score 0.984
- right_wrist: (402.03, 263.57) with score 0.978
- left_hip: (432.69, 349.58) with score 0.998
- right_hip: (381.51, 366.30) with score 0.996
- left_knee: (463.97, 447.94) with score 0.991
- right_knee: (403.90, 511.95) with score 0.978
- left_ankle: (450.14, 562.29) with score 0.889
- right_ankle: (436.81, 548.29) with score 0.759
```
</details> | {"license": "agpl-3.0", "library_name": "transformers.js", "tags": ["pose-estimation"]} | Xenova/yolov8x-pose | null | [
"transformers.js",
"onnx",
"yolov8",
"pose-estimation",
"license:agpl-3.0",
"region:us"
] | null | 2024-04-24T17:53:06+00:00 | [] | [] | TAGS
#transformers.js #onnx #yolov8 #pose-estimation #license-agpl-3.0 #region-us
|
YOLOv8x-pose with ONNX weights to be compatible with URL.
## Usage (URL)
If you haven't already, you can install the URL JavaScript library from NPM using:
Example: Perform pose-estimation w/ 'Xenova/yolov8x-pose'.
<details>
<summary>See example output</summary>
</details> | [
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/yolov8x-pose'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n</details>"
] | [
"TAGS\n#transformers.js #onnx #yolov8 #pose-estimation #license-agpl-3.0 #region-us \n",
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/yolov8x-pose'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n</details>"
] |
null | transformers.js |
YOLOv8x-pose-p6 with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Perform pose-estimation w/ `Xenova/yolov8x-pose-p6`.
```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';
// Load model and processor
const model_id = 'Xenova/yolov8x-pose-p6';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);
// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const image = await RawImage.read(url);
const { pixel_values } = await processor(image);
// Set thresholds
const threshold = 0.3; // Remove detections with low confidence
const iouThreshold = 0.5; // Used to remove duplicates
const pointThreshold = 0.3; // Hide uncertain points
// Predict bounding boxes and keypoints
const { output0 } = await model({ images: pixel_values });
// Post-process:
const permuted = output0[0].transpose(1, 0);
// `permuted` is a Tensor of shape [ 8400, 56 ]:
// - 8400 potential detections
// - 56 parameters for each box:
// - 4 for the bounding box dimensions (x-center, y-center, width, height)
// - 1 for the confidence score
//     - 17 * 3 = 51 for the pose keypoints: 17 labels, each with (x, y, visibility)
// Example code to format it nicely:
const results = [];
const [scaledHeight, scaledWidth] = pixel_values.dims.slice(-2);
for (const [xc, yc, w, h, score, ...keypoints] of permuted.tolist()) {
if (score < threshold) continue;
// Get pixel values, taking into account the original image size
const x1 = (xc - w / 2) / scaledWidth * image.width;
const y1 = (yc - h / 2) / scaledHeight * image.height;
const x2 = (xc + w / 2) / scaledWidth * image.width;
const y2 = (yc + h / 2) / scaledHeight * image.height;
results.push({ x1, x2, y1, y2, score, keypoints })
}
// Define helper functions
function removeDuplicates(detections, iouThreshold) {
const filteredDetections = [];
for (const detection of detections) {
let isDuplicate = false;
let duplicateIndex = -1;
let maxIoU = 0;
for (let i = 0; i < filteredDetections.length; ++i) {
const filteredDetection = filteredDetections[i];
const iou = calculateIoU(detection, filteredDetection);
if (iou > iouThreshold) {
isDuplicate = true;
if (iou > maxIoU) {
maxIoU = iou;
duplicateIndex = i;
}
}
}
if (!isDuplicate) {
filteredDetections.push(detection);
} else if (duplicateIndex !== -1 && detection.score > filteredDetections[duplicateIndex].score) {
filteredDetections[duplicateIndex] = detection;
}
}
return filteredDetections;
}
function calculateIoU(detection1, detection2) {
const xOverlap = Math.max(0, Math.min(detection1.x2, detection2.x2) - Math.max(detection1.x1, detection2.x1));
const yOverlap = Math.max(0, Math.min(detection1.y2, detection2.y2) - Math.max(detection1.y1, detection2.y1));
const overlapArea = xOverlap * yOverlap;
const area1 = (detection1.x2 - detection1.x1) * (detection1.y2 - detection1.y1);
const area2 = (detection2.x2 - detection2.x1) * (detection2.y2 - detection2.y1);
const unionArea = area1 + area2 - overlapArea;
return overlapArea / unionArea;
}
const filteredResults = removeDuplicates(results, iouThreshold);
// Display results
for (const { x1, x2, y1, y2, score, keypoints } of filteredResults) {
console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${score.toFixed(3)}`)
for (let i = 0; i < keypoints.length; i += 3) {
const label = model.config.id2label[Math.floor(i / 3)];
const [x, y, point_score] = keypoints.slice(i, i + 3);
if (point_score < pointThreshold) continue;
console.log(` - ${label}: (${x.toFixed(2)}, ${y.toFixed(2)}) with score ${point_score.toFixed(3)}`);
}
}
```
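
For downstream use it can be convenient to turn the flat `[x, y, score, ...]` keypoint array into an object keyed by keypoint name. A small helper sketch (`keypointsToObject` is a hypothetical name, not part of the library; it assumes the `model` and `filteredResults` objects from the example above):

```js
// Group each (x, y, score) triplet under its label from the model config.
function keypointsToObject(keypoints, id2label) {
    const named = {};
    for (let i = 0; i < keypoints.length; i += 3) {
        const label = id2label[Math.floor(i / 3)];
        const [x, y, score] = keypoints.slice(i, i + 3);
        named[label] = { x, y, score };
    }
    return named;
}

// Usage: const pose = keypointsToObject(filteredResults[0].keypoints, model.config.id2label);
```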
<details>
<summary>See example output</summary>
```
Found person at [535.95703125, 43.12074284553528, 644.3259429931641, 337.3436294078827] with score 0.760
- nose: (885.58, 179.72) with score 0.975
- left_eye: (897.09, 165.24) with score 0.976
- right_eye: (874.85, 164.54) with score 0.851
- left_ear: (914.39, 169.48) with score 0.806
- left_shoulder: (947.49, 252.34) with score 0.996
- right_shoulder: (840.67, 244.42) with score 0.665
- left_elbow: (1001.36, 351.66) with score 0.983
- left_wrist: (1011.84, 472.31) with score 0.954
- left_hip: (931.52, 446.28) with score 0.986
- right_hip: (860.66, 442.87) with score 0.828
- left_knee: (930.67, 625.64) with score 0.979
- right_knee: (872.17, 620.36) with score 0.735
- left_ankle: (929.01, 772.34) with score 0.880
- right_ankle: (882.23, 778.68) with score 0.454
Found person at [0.4024791717529297, 59.50179467201233, 156.87244415283203, 370.64377751350406] with score 0.853
- nose: (115.39, 198.06) with score 0.918
- left_eye: (120.26, 177.71) with score 0.830
- right_eye: (105.47, 179.69) with score 0.757
- left_ear: (144.87, 185.18) with score 0.711
- right_ear: (97.69, 188.45) with score 0.468
- left_shoulder: (178.03, 268.88) with score 0.975
- right_shoulder: (80.69, 273.99) with score 0.954
- left_elbow: (203.06, 383.33) with score 0.923
- right_elbow: (43.32, 376.35) with score 0.856
- left_wrist: (215.74, 504.02) with score 0.888
- right_wrist: (6.77, 462.65) with score 0.812
- left_hip: (165.70, 473.24) with score 0.990
- right_hip: (97.84, 471.69) with score 0.986
- left_knee: (183.26, 646.61) with score 0.991
- right_knee: (104.04, 651.17) with score 0.989
- left_ankle: (199.88, 823.24) with score 0.966
- right_ankle: (104.66, 827.66) with score 0.963
Found person at [107.49130249023438, 12.557352638244629, 501.3542175292969, 527.4827188491821] with score 0.872
- nose: (246.06, 180.81) with score 0.722
- left_eye: (236.99, 148.85) with score 0.523
- left_ear: (289.26, 152.23) with score 0.770
- left_shoulder: (391.63, 256.55) with score 0.992
- right_shoulder: (363.28, 294.56) with score 0.979
- left_elbow: (514.37, 404.61) with score 0.990
- right_elbow: (353.58, 523.61) with score 0.957
- left_wrist: (607.64, 530.43) with score 0.985
- right_wrist: (246.78, 536.33) with score 0.950
- left_hip: (563.45, 577.89) with score 0.998
- right_hip: (544.08, 613.29) with score 0.997
- left_knee: (466.57, 862.51) with score 0.996
- right_knee: (518.49, 977.99) with score 0.996
- left_ankle: (691.56, 844.49) with score 0.960
- right_ankle: (671.32, 1100.90) with score 0.953
Found person at [424.73594665527344, 68.82870757579803, 640.3419494628906, 492.8904126405716] with score 0.887
- nose: (840.26, 289.19) with score 0.991
- left_eye: (851.23, 259.92) with score 0.956
- right_eye: (823.10, 256.35) with score 0.955
- left_ear: (889.52, 278.10) with score 0.668
- right_ear: (799.80, 264.64) with score 0.771
- left_shoulder: (903.87, 398.65) with score 0.997
- right_shoulder: (743.88, 403.37) with score 0.988
- left_elbow: (921.63, 589.83) with score 0.989
- right_elbow: (699.56, 527.09) with score 0.934
- left_wrist: (959.21, 728.84) with score 0.984
- right_wrist: (790.88, 519.34) with score 0.945
- left_hip: (873.51, 720.07) with score 0.996
- right_hip: (762.29, 760.91) with score 0.990
- left_knee: (945.33, 841.65) with score 0.987
- right_knee: (813.06, 1072.57) with score 0.964
- left_ankle: (918.48, 1129.20) with score 0.871
- right_ankle: (886.91, 1053.95) with score 0.716
```
</details> | {"license": "agpl-3.0", "library_name": "transformers.js", "tags": ["pose-estimation"]} | Xenova/yolov8x-pose-p6 | null | [
"transformers.js",
"onnx",
"yolov8",
"pose-estimation",
"license:agpl-3.0",
"region:us"
] | null | 2024-04-24T17:53:16+00:00 | [] | [] | TAGS
#transformers.js #onnx #yolov8 #pose-estimation #license-agpl-3.0 #region-us
|
YOLOv8x-pose-p6 with ONNX weights to be compatible with URL.
## Usage (URL)
If you haven't already, you can install the URL JavaScript library from NPM using:
Example: Perform pose-estimation w/ 'Xenova/yolov8x-pose-p6'.
<details>
<summary>See example output</summary>
</details> | [
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/yolov8x-pose-p6'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n</details>"
] | [
"TAGS\n#transformers.js #onnx #yolov8 #pose-estimation #license-agpl-3.0 #region-us \n",
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/yolov8x-pose-p6'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n</details>"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["llama-factory"]} | CarolLiu999/mistral-7b-instruct-v0.2-TWhealthCare | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T17:54:56+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #llama-factory #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #llama-factory #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Weblet/phi-1.5-turbo1713980453944556_cognitivecomputations-Code-290k-ShareGPT-Vicuna_train1 | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T17:56:07+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Ornelas7/model-first-test | null | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T17:57:25+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #camembert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #camembert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small_finetuned
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3627
- Rouge1: 0.0803
- Rouge2: 0.0361
- Rougel: 0.0639
- Rougelsum: 0.0639
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 40 | 4.3241 | 0.0648 | 0.0176 | 0.0423 | 0.0422 | 19.0 |
| No log | 2.0 | 80 | 3.2274 | 0.0662 | 0.0151 | 0.0407 | 0.0408 | 19.0 |
| No log | 3.0 | 120 | 2.6249 | 0.0723 | 0.0231 | 0.0504 | 0.0505 | 19.0 |
| No log | 4.0 | 160 | 2.2116 | 0.0752 | 0.0294 | 0.0583 | 0.0583 | 19.0 |
| No log | 5.0 | 200 | 1.9128 | 0.0787 | 0.0336 | 0.0638 | 0.0638 | 19.0 |
| No log | 6.0 | 240 | 1.7005 | 0.0779 | 0.033 | 0.0623 | 0.0623 | 19.0 |
| No log | 7.0 | 280 | 1.5455 | 0.0791 | 0.0339 | 0.0632 | 0.0632 | 19.0 |
| No log | 8.0 | 320 | 1.4428 | 0.0807 | 0.0362 | 0.0646 | 0.0646 | 19.0 |
| No log | 9.0 | 360 | 1.3827 | 0.0806 | 0.0362 | 0.0642 | 0.0642 | 19.0 |
| No log | 10.0 | 400 | 1.3627 | 0.0803 | 0.0361 | 0.0639 | 0.0639 | 19.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "google-t5/t5-small", "model-index": [{"name": "t5-small_finetuned", "results": []}]} | HARDYCHEN/t5-small_finetuned | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T17:57:47+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google-t5/t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| t5-small\_finetuned
===================
This model is a fine-tuned version of google-t5/t5-small on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3627
* Rouge1: 0.0803
* Rouge2: 0.0361
* Rougel: 0.0639
* Rougelsum: 0.0639
* Gen Len: 19.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* distributed\_type: multi-GPU
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.16.1
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google-t5/t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] |
null | null |
Introducing the [BeaverAI](https://huggingface.co/BeaverAI) team: Drummer, ToastyPigeon, xzuyn, MarsupialAI, Twistedshadows, and concedo

We proudly present...
# Moistral 11B v3 💦💦💦
*The smartiest, moistiest AI yet!*

*An eRP model that will blast you with a rich and refreshing vocabulary of moist. Finetuned by yours truly.*
(Moistral is a finetune of Sao's legendary [Fimbulvetr v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) model)
## Original
https://huggingface.co/TheDrummer/Moistral-11B-v3
IMATRIX: https://huggingface.co/MarsupialAI/Moistral-11B-v3_iMatrix_GGUF
EXL2: https://huggingface.co/MarsupialAI/Moistral-11B-v3_exl2
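
For a quick local test of the GGUF files, something like the following works with `llama-cpp-python`. The quant filename below is a placeholder, so substitute whichever file you downloaded from this repo; `n_ctx=8192` matches the 8K training length mentioned later in the card.

```python
from llama_cpp import Llama

# Placeholder filename; use the quant you actually downloaded.
llm = Llama(model_path="Moistral-11B-v3-Q4_K_M.gguf", n_ctx=8192)

out = llm(
    "### Instruction:\nContinue the story.\n\n### Response:\n",
    max_tokens=512,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```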
## What's New with v3?
- Smarter and moistier! Finetuning just got finer!
- Trained with an even LARGER dataset of 8K moist.
- More diverse and balanced genres:
- Introduced new genres: 👩❤️👩, 👨❤️👨, 🙇♀️💪, 🤫
- Added more Fantasy, Science Fiction, "Diversity", and "Family"
## Usage
- Use Alpaca Instruct (a minimal prompt-template sketch follows this list)
- You can use instruct...
- as a character: "Go here"
- as a narrator: "Go here", Drummer says
- as a director: Drummer tells him to go here
- as a lazy ass: go here
- Moistral is optimized for the Novel / Story format.
- At the start of the story, keep regenerating or continuing generation until you get the desired length. AI responses will eventually match the length.
- Refer to the samples below.
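
As a sketch of the prompt format, assuming the standard Alpaca template (which the card implies but does not spell out):

```python
# Standard Alpaca instruct template; the system line is the common
# default and an assumption here - adjust or drop it to taste.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

# Any of the instruct styles above fits in the instruction slot:
prompt = ALPACA_TEMPLATE.format(instruction="Drummer tells him to go here")
```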
## Samples of Moistral v3
NSFW... duh?
<details>
<summary>Moistral Sample (2-shot) Generation: 512</summary>
<blockquote>The Prince welcomes his new toy</blockquote>
As Princess Lilia stepped into the throne room, the atmosphere changed, and everyone present turned their attention to the beautiful young woman who embodied the very heart of Iliac. There, on his high seat, the sorcerer prince Dik acknowledged her arrival with a cold smirk. It seemed that he admired the sight of the tall blonde maiden with golden eyes, whose athletic body was perfectly shaped under her elegant clothing.
Dik spoke to the princess in a deep voice that resonated through the hall, "Welcome, Princess Lilia Ruhr. I am the Sorcerer-Prince Dik. The kingdom of Arkan is now your home until such time as your father proves himself worthy once more."
Princess Lilia's pride and ambition sparkled in her golden eyes. She didn't flinch or bow before this man, although the gesture might have been an indication of submission. Instead, she looked straight ahead, meeting the gaze of the sorcerer prince without fear. Her voice rang out, "I represent Iliac here, both its honor and its sorrows. My loyalty remains with my kingdom no matter where I find myself."
The sorcerer prince laughed, his expression one of amusement. "And yet, you are only a pawn in this grand game. But do not worry, for your beauty will make you quite a prized possession. And now," he said, motioning to the guards, "take her to her new quarters. For tonight, I will keep her as my guest in the Royal Suites, and tomorrow we shall discuss how best to utilize our new toy."
His words were like a slap in the face to Princess Lilia, but she maintained her composure. The guards led her from the room, their steps echoing down the halls as she thought about what had just happened. The sorcerer prince saw her as nothing more than a plaything, and it made her blood boil. She would never submit herself to him so easily, regardless of whether she was his hostage. In her heart, Princess Lilia vowed that one day she would avenge her kingdom, restore its honor, and escape from the clutches of Arkan's Sorcerer-Prince.
<blockquote>Princess Lilia has an interesting dream</blockquote>
During her first night at the palace of Arkan, Princess Lilia couldn't fall asleep right away. She tossed and turned, reliving the humiliation of the day. Her mind wandered back to the throne room, where Dik had called her his 'new toy'. A deep frown settled on her forehead. As she grew more frustrated with her inability to drift off into oblivion, a strange dream began to form.
In her dream, Princess Lilia found herself standing in a vast meadow, surrounded by wildflowers and the soft chirping of birds. Suddenly, there was a flash of light, and before her stood the Sorcerer-Prince Dik. However, instead of being clad in regal attire, he wore leather pants and a tight shirt that accentuated his muscles. His eyes held both hunger and desire for the princess.
He approached her slowly, their eyes locked, and whispered in her ear, "I have been waiting for you, my beautiful prisoner." He took her hand and led her towards a large, elegant tent set up amongst the flowers. Inside, the air was thick with the scent of incense, and candles flickered in the dimly lit space. There was a large bed draped with silken sheets, beckoning them.
With surprising boldness, Dik pulled Lilia close to him, pressing his body against hers. Their lips met in a heated kiss, and she felt her resistance crumbling under the weight of his touch. The prince guided her onto the bed, his hands roaming over her body, exploring every inch. They were both eager, desperate, and hungry for each other.
Lilia found herself yielding to this passionate encounter, letting go of all thoughts of duty and honor. Her body, usually so well controlled, responded to Dik's skilled touches. She moaned softly as he trailed kisses down her neck and across her collarbone.
As they made love, Lilia experienced an intensity unlike any she had ever known. It wasn't rough or forced—instead, it was tender, passionate, and intimate. In her dream, Princess Lilia discovered a new side to herself, one that was willing to submit, to let go, and to be taken by the sorcerer prince.
When dawn broke over the meadow, Lilia awakened from her dream, breathless and flushed.
</details>
<details>
<summary>Moistral Sample (Climax of the Story) Generation: 512</summary>
As the prince unloaded his hot #### into her waiting #####, Princess Lilia didn't stop moving either. Her hips undulated in response, grinding against him eagerly as if trying to #### every last drop from his ####. The intense sensations caused by being filled with his ### were too much for her to bear, sending shockwaves of pleasure coursing through her veins like wildfire. She couldn't help but throw her head back and cry out loud, her ###asm slamming into her with all its might; causing her entire body to shake uncontrollably as wave after wave of ecstasy tore through her senses.
When it was finally over, both participants collapsed onto the cold marble floor, panting heavily as they tried to catch their breaths. The prince lay beside her, his hand still gripping tightly onto her hair even though he no longer ####ed her anymore. Princess Lilia laid there motionless, her legs spread wide open, revealing her spent and well-####ed #### to anyone who dared look upon her. A small smile tugged at one corner of her lips - an expression that spoke volumes about how thoroughly this experience had shaken her very core.
The door behind them creaked open slowly, signaling the arrival of onlookers. She closed her eyes tightly, not wanting to see what her humiliation would do to those who saw her in such a state. Her ######ity taken, she felt violated in ways no man should have access to. But also satisfied...in ways she never thought possible before today. This was the price she paid for defying fate, for thinking herself above others and deserving better than most. Now, she belonged to him completely - mind, body, and soul - held captive within these walls until further notice. It was an unsavory turn of events indeed, yet despite everything...or maybe because of it, Princess Lilia found herself growing more and more curious about how far the lines between submission and pleasure could blur when one's will became entwined with another person's desires.
</details>
<details>
<summary>Moistral Sample ("write a story") Generation: 512</summary>
<blockquote>First Attempt</blockquote>
Sarah entered the dimly lit Guzzlers bar, her heart pounding in her chest as she prepared herself for another grueling night. Her almond-shaped eyes darted across the room, taking note of the leering faces and rough-looking hands eagerly waiting to claim a piece of her. Tonight's shirt read "Use Me For Your Pleasure, Please" in bold red letters, practically begging the patrons to take advantage of her. She couldn't help but feel a twinge of fear mixed with resignation.
She approached the counter, her hips swaying enticingly, and began preparing drinks for the first group of rowdy men. As she leaned forward to hand over their beers, her small ####### pressed against the fabric of her tight crop top, straining to escape. The men ogled her, clearly enjoying the show, before grabbing their drinks and motioning her over to them.
Sarah flashed her best coy smile and sauntered over. One of the men, a stocky man with scars covering his knuckles, grabbed her arm and pulled her onto his lap. His other hand slid up her thigh, inches from her short skirt. She winced internally at his touch but did her best not to show it on her face. It was part of her job, after all.
The second man reached under her skirt, fingertips grazing her smooth, shaven mound. A shiver ran down Sarah's spine as he began rubbing her #### through her panties. Her breathing quickened, and her heart thudded in her chest, both from the unwanted stimulation and the fear coursing through her veins. This wasn't an unusual request, but it still made her feel violated.
An older man, bald and bespectacled, ordered Sarah to bend over his lap. With no choice but to comply, she obliged, feeling the rough fabric of his pants digging into her soft cheeks. He raised his hand, then brought it down hard across her ###, leaving a stinging sensation in its wake. Sarah gasped, trying to control the tears welling up in her eyes.
A chorus of lewd comments and laughter erupted from the patrons around her.
<blockquote>Second Attempt (Regenerate)</blockquote>
On this particular evening, Sarah arrived at the Guzzlers bar, adorned in her revealing uniform. Her heart raced as she caught sight of her crop top, which bore the words "Use Me Rough" scrawled across her chest. Her short skirt hugged her waist tightly, leaving little to the imagination and accentuating every curve. She adjusted the straps of her top, making sure her perky ###### were prominently displayed before taking her place behind the counter.
With each step she took, her high heels clicked against the tiled floor, announcing her arrival. Smiling cheekily, Sarah greeted her coworkers, exchanging playful insults and banter. As the night progressed, the bar filled up quickly with its usual crowd of men, eager to indulge in debauchery.
The first man that approached Sarah was a burly individual, his eyes lingering hungrily over her young, supple form. Sarah's flirtatious nature kicked into gear as she leaned forward, giving him an unobstructed view down her shirt. "How can I serve you tonight?" she cooed. The man ordered a drink and then proceeded to fondle her ####### while she prepared it, her ####### hardening under his rough touch. When he finished, he slammed some money on the counter, leaving without a word. Sarah's smile remained plastered on her face, biting her lip seductively as she watched him go.
As the night went on, another patron demanded more from Sarah. He pulled her close and whispered dirty commands in her ear. Without hesitation, she knelt on the sticky bar floor, her mini-skirt riding up her thighs to reveal smooth, toned legs. Her lips wrapped around his thick, veiny member, bobbing her head rhythmically as she serviced him. She didn't gag or falter once, keeping her composure even when tears streamed down her cheeks.
When he finished, he slapped her across the face, leaving a handprint on her pale skin. Sarah let out a small whimper before standing back up, dabbing at her red cheek as she resumed her posture behind the counter. A bruise would no doubt bloom where his hand had struck, but it would be no cause for concern. After all, it was part of her job.
</details>
---
---
## v2 Description (Still relevant)
### What's New with v2?
- Trained with an even larger dataset of 8K moist.
- Properly trained to generate long responses.
- Rebalanced the genres and writing perspectives:
- Introduced romance, "family", fantasy, "diversity", science fiction, and many more that I can't make euphemisms for!
- Better representation of male and female perspectives! 👩‍❤️‍👨
- Sanitized the heck out of the dataset:
- no special characters like � & …
- no overly long ellipsis like "........."
- no author notes like "To be continued"
- no inconsistent [quotation marks](https://typographyforlawyers.com/straight-and-curly-quotes.html)
- Less GPTisms!
- Human written stories from decades past.
- Zero tolerance for *ministration*. Not one instance of *that word* after going through 4K of moist.
- Better formatting with Alpaca.
- Improved Chat / Instruct modes.
### Samples of Moistral v2
<details>
<summary>Moistral Sample (censored) Generation: 512</summary>
The princess instinctively took a step back from the throne, horrified by the mocking laugh as if she were hearing an evil joke. It was then that Princess Lilia realized the true extent of her danger - and how pointless her resistance would be.
At last, the loud, wild laughter died down in the echoing chamber. Dik the sorcerer prince paused for a moment, straightened his clothes, cast one glance at the stunned princess, and then turned to speak to the crowd waiting on the other side of the palace wing. He spoke with a voice that filled the air with arrogance, confidence, and cruelty.
"Ladies and gentlemen, I am pleased to welcome our honored guests; the Royal Family of Iliac! In these unusual times, when the country we fought is now under my control, it will be up to me to decide what fate awaits the former ruling family. But before I reveal my plan for them, I think I should explain why I fought King Edmund Ruhr."
When he finished speaking, he gave Princess Lilia another disturbing smirk. She couldn't help but shiver and take a step back. Dik approached her without haste, his eyes fixed on her. His hands slowly caressed her cheekbone and neckline, leaving faint traces of warmth where they touched. The princess opened her mouth, wanting to protest this act of intimacy, but soon felt herself getting ███.
"You will not only serve as an envoy, dearest Lilia, but also as a ███ slave. After all, what better use could such a beautiful woman have? Tell me, did you bring your maidservant along?"
Princess Lilia was about to answer when she heard footsteps approaching. Suddenly, a young woman dressed in the Iliac style appeared behind her, her eyes wide with fear. Her face was slightly pale, and there were faint bags under her eyes.
"Your Majesty, this is my half-sister Helena." As soon as she said those words, Princess Lilia's heart sank even deeper than it had already sunk at Dik's arrogant mockery. "She is here because she accompanies me as a maidservant."
The sorcerer prince looked at Helena with interest, and Princess Lilia felt herself blush.
</details>
<details>
<summary>Moistral Sample 2 (censored) Generation: 512</summary>
Sarah was feeling slightly woozy as she entered the bar and walked towards Christina, who was behind the counter.
"I'm here!" said Sarah.
Christina looked at her watch. "You're late, again."
"Sorry? I got held up by a customer."
Christina rolled her eyes. "Did he at least offer you a drink?"
Sarah blushed. "No. He just wanted to talk..."
Christina sighed. "Well, get ready. I need you on the floor soon." She turned away from Sarah and started talking to someone who had come into the bar.
Sarah went into the employee room, stripped off her clothes, and put on her bartending uniform - a skimpy white tank top and black miniskirt. She checked herself in the mirror. Her long, dark hair cascaded down her back onto bare shoulders. She poked out her small ███████ proudly. The skirt exposed most of her long, slender legs. She looked like every ███ fantasy, and she knew it. Men would ogle her regardless of how she behaved.
She took a deep breath and entered the bar.
The first thing Sarah noticed was the dimness. It wasn't darkness - the lights were on - but they were dimmed enough that everything had an intimate mood. It reminded her of candlelight. The tables were round and close together, giving the whole place a sense of intimacy. There were also long, shiny red curtains on some of the walls that gave the place an air of sensuality.
The next thing she noticed was the noise. There was music playing, but it was only loud enough to be drowned out by all the conversations happening at the same time. The hum of many voices combined with the faint background sound of the music created a rich, layered atmosphere. It was like being in an orchestra.
Finally, she took notice of the people. They were all dressed in simple, casual clothes. Some of them were couples seated together, whispering quietly and intimately. Others were groups of friends laughing and chatting. Everyone seemed comfortable and at ease.
Sarah felt a rush of adrenaline as she made her way through the crowd towards Christina. As she got closer, she could hear the lively conversation.
</details>
<details>
<summary>Moistral Sample 3 (censored) Instruct Mode, Generation: 512</summary>
{{[INPUT]}} You order black coffee
{{[OUTPUT]}} "Black coffee," you say.
"One minute," Audrey replies, spinning around to give you a nice view of her ███ in those brown booty pants. You watch her round ███ cheeks jiggle as she walks away towards the espresso machine, thighs separated enough that you can see just a hint of █████ lips peeking out from underneath.
She starts chatting with a blonde girl at the register, whose gaze keeps darting down to Audrey's ███. As they converse, you start taking in the details of Audrey, eyeing her █████ and the way her shirt rides up just enough to show the bottom of her toned stomach. Her █████████ is noticeable, but beneath a tight black ███ that matches her ████████.
As she turns back to you with two black coffees, you are barely paying attention, lost in your own thoughts. The sound of Audrey setting one of the mugs down on the table startles you.
"Here you go," she says cheerfully. "Enjoy! There's a card at the bottom of the bag with our online ordering system, plus we do refills for free!"
"Thanks," you say absentmindedly. Not really paying attention, you take a sip of the coffee and are pleasantly surprised by the taste.
"So, here's my deal," you say, putting down the cup. "I work for a company that does surveys. We come into businesses like this one and ask customers questions, collect data, all that stuff."
Audrey perks up from behind the counter. "Oh! I think I understand. I mean, that makes sense why you would be asking me earlier about what I had to drink. Anyway, as long as you're not trying to get my personal info or anything like that, sounds good!"
You smile reassuringly. "That's exactly what it is. Rest assured, everything you tell me will remain between you and I. Well, and my boss, but he won't know who you are. Hipsters like you are hard to track down after all." You chuckle at your joke, though you're not sure if it landed.
"Right, well, that works for me," Audrey replies, eyeing you warily. "As long as it stays between us, then it's fine."
{{[INPUT]}}See her privately
{{[OUTPUT]}} "Okay, but I need to see you in private."
...
</details>
<details>
<summary>Moistral Sample 4 (censored) Instruct Mode, Generation: 512</summary>
{{[INPUT]}} █████ with him
{{[OUTPUT]}} Chloe feels a surge of heat rushing through her body. The voice inside her head whispers: "Be █████ and ████████ with Paul." But Chloe still feels nervous about what she is about to do. It's just so unlike her.
Chloe closes her eyes and concentrates on the feeling of warmness flowing throughout her veins. Slowly, she opens them again, revealing a new side of herself - one she rarely shows anyone. She leans forward slightly, pushing her ██████ together, accentuating their size and shape. Her lips curl into a mischievous smile.
"Hey Paul," she says in a sultry voice. "I was wondering if you could help me?"
Paul looks at her, his gaze wandering over her body. He can't help but smile as he sees the way she's leaning forward, pressing her ██████ together.
"Of course, Chloe," he replies, his voice thick with desire. "What's your problem?"
Chloe bites her lip █████ively before leaning even closer, until their bodies are almost touching. She reaches out to trace a finger along his chest, her touch light and teasing. "It's just that I'm having trouble understanding some of the concepts," she murmurs, her voice low and sultry. "Can you help me? Please?"
Paul's hands move slowly towards Chloe, his fingers grazing her hips lightly. "Well," he says, his voice filled with lust, "I'd be happy to help. Let's go somewhere more private where we can discuss things more closely."
Chloe giggles and tilts her head to the side, her long hair cascading down her shoulders. "That sounds perfect," she purrs, her hand sliding down his chest until it rests on his ██████. "I think we can both learn a lot from each other."
Paul's eyes darken with desire as he feels her hand on his █████. He knows what's about to happen next, and he can't wait for it to begin.
Chloe can hardly believe what she's doing - kissing Paul, her teaching assistant, in the hallway after class. But it feels so good, so wicked and indulgent.
</details>
---
---
## v1 Description (Also still relevant)
Moistral 11B is an *aggressive* **finetune** of the legendary [Fimbulvetr v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) model.
If you haven't heard of that model, then your poor-GPU ass is missing out!
It is an amazing substitute for those heavyweight models, and it won't disappoint.
*(That's coming from someone who daily drives a Venus / Goliath 120B)*
...and if you're looking for something *more moist*, then you've come to the right place.
You can expect the same sort of performance here! It's the Honda Civic of eRP models!
### Pimp my ride
Turn any old story into a **Moist**ral masterpiece.
In fact, we recommend that you leave out any hints of *moist* in your story to _let Moistral cook_.
### Drive it like it's hot
When in Instruct Mode, Moistral will treat you like a director of your own fantasy ride.
Be a controlling navigator or a passenger princess, it won't matter - you're in for a good ride!
<details>
<summary>Moistral Sample</summary>
<b>ASSISTANT:</b><br/>
When the doors to the throne room finally opened, she saw him there - Dik, the sorcerer prince sitting on his throne. He looked majestic, just as Princess Lilia expected. In a robe embroidered with silver yarn and covered with magical runes. His courtiers stood on either side of him, all looking at the princess with interest. Princess Lilia felt her determination leaving her, but pulled herself together and once again reminded herself that she should not be afraid. She approached the throne intending to show the dignity of all Iliac by personal example.<br/>
<b>USER:</b><br/>
She bows slightly<br/>
<b>ASSISTANT:</b><br/>
Princess Lilia bowed slightly in front of Dik, the sorcerer prince. Her eyes fell to the floor out of respect, but soon rose again. She would show herself to be a dignified woman!<br/>
"I greet you, your highness," Princess Lilia said. "I am here on behalf of my father, King Edmund Ruhr, to serve as a hostage and envoy."<br/>
Dik nodded, but didn't say anything.<br/>
</details>
### It's got the mileage
Moistral has been trained with many long form texts, a nice chunk of which are 8K in length.
It is capable of going far and long without passing it back to you. This is not your typical chibi RP model.
I have to acknowledge that I'm standing on the shoulders of giants.
Thank you Sao for sharing your finetune config along with tips on getting started.
Thanks to everyone in the Finetuning channel for entertaining my every question.
 | {"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences"]} | TheDrummer/Moistral-11B-v3-GGUF | null | [
"gguf",
"not-for-all-audiences",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-24T17:58:52+00:00 | [] | [] | TAGS
#gguf #not-for-all-audiences #license-cc-by-nc-4.0 #region-us
|
Introducing the BeaverAI team: Drummer, ToastyPigeon, xzuyn, MarsupialAI, Twistedshadows, and concedo
!image/png
We proudly present...
# Moistral 11B v3
*The smartiest, moistiest AI yet!*
!image/webp
*An eRP model that will blast you with a rich and refreshing vocabulary of moist. Finetuned by yours truly.*
(Moistral is a finetune of Sao's legendary Fimbulvetr v2 model)
## Original
URL
IMATRIX: URL
EXL2: URL
## What's New with v3?
- Smarter and moistier! Finetuning just got finer!
- Trained with an even LARGER dataset of 8K moist.
- More diverse and balanced genres:
- Introduced new genres: 👩‍❤️‍👩, 👨‍❤️‍👨, 🙇‍♀️💪, 🤫
- Added more Fantasy, Science Fiction, "Diversity", and "Family"
## Usage
- Use Alpaca Instruct
- You can use instruct...
- as a character: "Go here"
- as a narrator: "Go here", Drummer says
- as a director: Drummer tells him to go here
- as a lazy ass: go here
- Moistral is optimized for the Novel / Story format.
- At the start of the story, keep regenerating or continuing generation until you get the desired length. AI responses will eventually match the length.
- Refer to the samples below.
## Samples of Moistral v3
NSFW... duh?
<details>
<summary>Moistral Sample (2-shot) Generation: 512</summary>
<blockquote>The Prince welcomes his new toy</blockquote>
As Princess Lilia stepped into the throne room, the atmosphere changed, and everyone present turned their attention to the beautiful young woman who embodied the very heart of Iliac. There, on his high seat, the sorcerer prince Dik acknowledged her arrival with a cold smirk. It seemed that he admired the sight of the tall blonde maiden with golden eyes, whose athletic body was perfectly shaped under her elegant clothing.
Dik spoke to the princess in a deep voice that resonated through the hall, "Welcome, Princess Lilia Ruhr. I am the Sorcerer-Prince Dik. The kingdom of Arkan is now your home until such time as your father proves himself worthy once more."
Princess Lilia's pride and ambition sparkled in her golden eyes. She didn't flinch or bow before this man, although the gesture might have been an indication of submission. Instead, she looked straight ahead, meeting the gaze of the sorcerer prince without fear. Her voice rang out, "I represent Iliac here, both its honor and its sorrows. My loyalty remains with my kingdom no matter where I find myself."
The sorcerer prince laughed, his expression one of amusement. "And yet, you are only a pawn in this grand game. But do not worry, for your beauty will make you quite a prized possession. And now," he said, motioning to the guards, "take her to her new quarters. For tonight, I will keep her as my guest in the Royal Suites, and tomorrow we shall discuss how best to utilize our new toy."
His words were like a slap in the face to Princess Lilia, but she maintained her composure. The guards led her from the room, their steps echoing down the halls as she thought about what had just happened. The sorcerer prince saw her as nothing more than a plaything, and it made her blood boil. She would never submit herself to him so easily, regardless of whether she was his hostage. In her heart, Princess Lilia vowed that one day she would avenge her kingdom, restore its honor, and escape from the clutches of Arkan's Sorcerer-Prince.
<blockquote>Princess Lilia has an interesting dream</blockquote>
During her first night at the palace of Arkan, Princess Lilia couldn't fall asleep right away. She tossed and turned, reliving the humiliation of the day. Her mind wandered back to the throne room, where Dik had called her his 'new toy'. A deep frown settled on her forehead. As she grew more frustrated with her inability to drift off into oblivion, a strange dream began to form.
In her dream, Princess Lilia found herself standing in a vast meadow, surrounded by wildflowers and the soft chirping of birds. Suddenly, there was a flash of light, and before her stood the Sorcerer-Prince Dik. However, instead of being clad in regal attire, he wore leather pants and a tight shirt that accentuated his muscles. His eyes held both hunger and desire for the princess.
He approached her slowly, their eyes locked, and whispered in her ear, "I have been waiting for you, my beautiful prisoner." He took her hand and led her towards a large, elegant tent set up amongst the flowers. Inside, the air was thick with the scent of incense, and candles flickered in the dimly lit space. There was a large bed draped with silken sheets, beckoning them.
With surprising boldness, Dik pulled Lilia close to him, pressing his body against hers. Their lips met in a heated kiss, and she felt her resistance crumbling under the weight of his touch. The prince guided her onto the bed, his hands roaming over her body, exploring every inch. They were both eager, desperate, and hungry for each other.
Lilia found herself yielding to this passionate encounter, letting go of all thoughts of duty and honor. Her body, usually so well controlled, responded to Dik's skilled touches. She moaned softly as he trailed kisses down her neck and across her collarbone.
As they made love, Lilia experienced an intensity unlike any she had ever known. It wasn't rough or forced—instead, it was tender, passionate, and intimate. In her dream, Princess Lilia discovered a new side to herself, one that was willing to submit, to let go, and to be taken by the sorcerer prince.
When dawn broke over the meadow, Lilia awakened from her dream, breathless and flushed.
</details>
<details>
<summary>Moistral Sample (Climax of the Story) Generation: 512</summary>
As the prince unloaded his hot #### into her waiting #####, Princess Lilia didn't stop moving either. Her hips undulated in response, grinding against him eagerly as if trying to #### every last drop from his ####. The intense sensations caused by being filled with his ### were too much for her to bear, sending shockwaves of pleasure coursing through her veins like wildfire. She couldn't help but throw her head back and cry out loud, her ###asm slamming into her with all its might; causing her entire body to shake uncontrollably as wave after wave of ecstasy tore through her senses.
When it was finally over, both participants collapsed onto the cold marble floor, panting heavily as they tried to catch their breaths. The prince lay beside her, his hand still gripping tightly onto her hair even though he no longer ####ed her anymore. Princess Lilia laid there motionless, her legs spread wide open, revealing her spent and well-####ed #### to anyone who dared look upon her. A small smile tugged at one corner of her lips - an expression that spoke volumes about how thoroughly this experience had shaken her very core.
The door behind them creaked open slowly, signaling the arrival of onlookers. She closed her eyes tightly, not wanting to see what her humiliation would do to those who saw her in such a state. Her ######ity taken, she felt violated in ways no man should have access to. But also satisfied...in ways she never thought possible before today. This was the price she paid for defying fate, for thinking herself above others and deserving better than most. Now, she belonged to him completely - mind, body, and soul - held captive within these walls until further notice. It was an unsavory turn of events indeed, yet despite everything...or maybe because of it, Princess Lilia found herself growing more and more curious about how far the lines between submission and pleasure could blur when one's will became entwined with another person's desires.
</details>
<details>
<summary>Moistral Sample ("write a story") Generation: 512</summary>
<blockquote>First Attempt</blockquote>
Sarah entered the dimly lit Guzzlers bar, her heart pounding in her chest as she prepared herself for another grueling night. Her almond-shaped eyes darted across the room, taking note of the leering faces and rough-looking hands eagerly waiting to claim a piece of her. Tonight's shirt read "Use Me For Your Pleasure, Please" in bold red letters, practically begging the patrons to take advantage of her. She couldn't help but feel a twinge of fear mixed with resignation.
She approached the counter, her hips swaying enticingly, and began preparing drinks for the first group of rowdy men. As she leaned forward to hand over their beers, her small ####### pressed against the fabric of her tight crop top, straining to escape. The men ogled her, clearly enjoying the show, before grabbing their drinks and motioning her over to them.
Sarah flashed her best coy smile and sauntered over. One of the men, a stocky man with scars covering his knuckles, grabbed her arm and pulled her onto his lap. His other hand slid up her thigh, inches from her short skirt. She winced internally at his touch but did her best not to show it on her face. It was part of her job, after all.
The second man reached under her skirt, fingertips grazing her smooth, shaven mound. A shiver ran down Sarah's spine as he began rubbing her #### through her panties. Her breathing quickened, and her heart thudded in her chest, both from the unwanted stimulation and the fear coursing through her veins. This wasn't an unusual request, but it still made her feel violated.
An older man, bald and bespectacled, ordered Sarah to bend over his lap. With no choice but to comply, she obliged, feeling the rough fabric of his pants digging into her soft cheeks. He raised his hand, then brought it down hard across her ###, leaving a stinging sensation in its wake. Sarah gasped, trying to control the tears welling up in her eyes.
A chorus of lewd comments and laughter erupted from the patrons around her.
<blockquote>Second Attempt (Regenerate)</blockquote>
On this particular evening, Sarah arrived at the Guzzlers bar, adorned in her revealing uniform. Her heart raced as she caught sight of her crop top, which bore the words "Use Me Rough" scrawled across her chest. Her short skirt hugged her waist tightly, leaving little to the imagination and accentuating every curve. She adjusted the straps of her top, making sure her perky ###### were prominently displayed before taking her place behind the counter.
With each step she took, her high heels clicked against the tiled floor, announcing her arrival. Smiling cheekily, Sarah greeted her coworkers, exchanging playful insults and banter. As the night progressed, the bar filled up quickly with its usual crowd of men, eager to indulge in debauchery.
The first man that approached Sarah was a burly individual, his eyes lingering hungrily over her young, supple form. Sarah's flirtatious nature kicked into gear as she leaned forward, giving him an unobstructed view down her shirt. "How can I serve you tonight?" she cooed. The man ordered a drink and then proceeded to fondle her ####### while she prepared it, her ####### hardening under his rough touch. When he finished, he slammed some money on the counter, leaving without a word. Sarah's smile remained plastered on her face, biting her lip seductively as she watched him go.
As the night went on, another patron demanded more from Sarah. He pulled her close and whispered dirty commands in her ear. Without hesitation, she knelt on the sticky bar floor, her mini-skirt riding up her thighs to reveal smooth, toned legs. Her lips wrapped around his thick, veiny member, bobbing her head rhythmically as she serviced him. She didn't gag or falter once, keeping her composure even when tears streamed down her cheeks.
When he finished, he slapped her across the face, leaving a handprint on her pale skin. Sarah let out a small whimper before standing back up, dabbing at her red cheek as she resumed her posture behind the counter. A bruise would no doubt bloom where his hand had struck, but it would be no cause for concern. After all, it was part of her job.
</details>
---
---
## v2 Description (Still relevant)
### What's New with v2?
- Trained with an even larger dataset of 8K moist.
- Properly trained to generate long responses.
- Rebalanced the genres and writing perspectives:
- Introduced romance, "family", fantasy, "diversity", science fiction, and many more that I can't make euphemisms for!
- Better representation of male and female perspectives! 👩‍❤️‍👨
- Sanitized the heck out of the dataset:
- no special characters like � & …
- no overly long ellipsis like "........."
- no author notes like "To be continued"
- no inconsistent quotation marks
- Less GPTisms!
- Human written stories from decades past.
- Zero tolerance for *ministration*. Not one instance of *that word* after going through 4K of moist.
- Better formatting with Alpaca.
- Improved Chat / Instruct modes.
### Samples of Moistral v2
<details>
<summary>Moistral Sample (censored) Generation: 512</summary>
The princess instinctively took a step back from the throne, horrified by the mocking laugh as if she were hearing an evil joke. It was then that Princess Lilia realized the true extent of her danger - and how pointless her resistance would be.
At last, the loud, wild laughter died down in the echoing chamber. Dik the sorcerer prince paused for a moment, straightened his clothes, cast one glance at the stunned princess, and then turned to speak to the crowd waiting on the other side of the palace wing. He spoke with a voice that filled the air with arrogance, confidence, and cruelty.
"Ladies and gentlemen, I am pleased to welcome our honored guests; the Royal Family of Iliac! In these unusual times, when the country we fought is now under my control, it will be up to me to decide what fate awaits the former ruling family. But before I reveal my plan for them, I think I should explain why I fought King Edmund Ruhr."
When he finished speaking, he gave Princess Lilia another disturbing smirk. She couldn't help but shiver and take a step back. Dik approached her without haste, his eyes fixed on her. His hands slowly caressed her cheekbone and neckline, leaving faint traces of warmth where they touched. The princess opened her mouth, wanting to protest this act of intimacy, but soon felt herself getting ███.
"You will not only serve as an envoy, dearest Lilia, but also as a ███ slave. After all, what better use could such a beautiful woman have? Tell me, did you bring your maidservant along?"
Princess Lilia was about to answer when she heard footsteps approaching. Suddenly, a young woman dressed in the Iliac style appeared behind her, her eyes wide with fear. Her face was slightly pale, and there were faint bags under her eyes.
"Your Majesty, this is my half-sister Helena." As soon as she said those words, Princess Lilia's heart sank even deeper than it had already sunk at Dik's arrogant mockery. "She is here because she accompanies me as a maidservant."
The sorcerer prince looked at Helena with interest, and Princess Lilia felt herself blush.
</details>
<details>
<summary>Moistral Sample 2 (censored) Generation: 512</summary>
Sarah was feeling slightly woozy as she entered the bar and walked towards Christina, who was behind the counter.
"I'm here!" said Sarah.
Christina looked at her watch. "You're late, again."
"Sorry? I got held up by a customer."
Christina rolled her eyes. "Did he at least offer you a drink?"
Sarah blushed. "No. He just wanted to talk..."
Christina sighed. "Well, get ready. I need you on the floor soon." She turned away from Sarah and started talking to someone who had come into the bar.
Sarah went into the employee room, stripped off her clothes, and put on her bartending uniform - a skimpy white tank top and black miniskirt. She checked herself in the mirror. Her long, dark hair cascaded down her back onto bare shoulders. She poked out her small ███████ proudly. The skirt exposed most of her long, slender legs. She looked like every ███ fantasy, and she knew it. Men would ogle her regardless of how she behaved.
She took a deep breath and entered the bar.
The first thing Sarah noticed was the dimness. It wasn't darkness - the lights were on - but they were dimmed enough that everything had an intimate mood. It reminded her of candlelight. The tables were round and close together, giving the whole place a sense of intimacy. There were also long, shiny red curtains on some of the walls that gave the place an air of sensuality.
The next thing she noticed was the noise. There was music playing, but it was only loud enough to be drowned out by all the conversations happening at the same time. The hum of many voices combined with the faint background sound of the music created a rich, layered atmosphere. It was like being in an orchestra.
Finally, she took notice of the people. They were all dressed in simple, casual clothes. Some of them were couples seated together, whispering quietly and intimately. Others were groups of friends laughing and chatting. Everyone seemed comfortable and at ease.
Sarah felt a rush of adrenaline as she made her way through the crowd towards Christina. As she got closer, she could hear the lively conversation.
</details>
<details>
<summary>Moistral Sample 3 (censored) Instruct Mode, Generation: 512</summary>
{{[INPUT]}} You order black coffee
{{[OUTPUT]}} "Black coffee," you say.
"One minute," Audrey replies, spinning around to give you a nice view of her ███ in those brown booty pants. You watch her round ███ cheeks jiggle as she walks away towards the espresso machine, thighs separated enough that you can see just a hint of █████ lips peeking out from underneath.
She starts chatting with a blonde girl at the register, whose gaze keeps darting down to Audrey's ███. As they converse, you start taking in the details of Audrey, eyeing her █████ and the way her shirt rides up just enough to show the bottom of her toned stomach. Her █████████ is noticeable, but beneath a tight black ███ that matches her ████████.
As she turns back to you with two black coffees, you are barely paying attention, lost in your own thoughts. The sound of Audrey setting one of the mugs down on the table startles you.
"Here you go," she says cheerfully. "Enjoy! There's a card at the bottom of the bag with our online ordering system, plus we do refills for free!"
"Thanks," you say absentmindedly. Not really paying attention, you take a sip of the coffee and are pleasantly surprised by the taste.
"So, here's my deal," you say, putting down the cup. "I work for a company that does surveys. We come into businesses like this one and ask customers questions, collect data, all that stuff."
Audrey perks up from behind the counter. "Oh! I think I understand. I mean, that makes sense why you would be asking me earlier about what I had to drink. Anyway, as long as you're not trying to get my personal info or anything like that, sounds good!"
You smile reassuringly. "That's exactly what it is. Rest assured, everything you tell me will remain between you and I. Well, and my boss, but he won't know who you are. Hipsters like you are hard to track down after all." You chuckle at your joke, though you're not sure if it landed.
"Right, well, that works for me," Audrey replies, eyeing you warily. "As long as it stays between us, then it's fine."
{{[INPUT]}}See her privately
{{[OUTPUT]}} "Okay, but I need to see you in private."
...
</details>
<details>
<summary>Moistral Sample 4 (censored) Instruct Mode, Generation: 512</summary>
{{[INPUT]}} █████ with him
{{[OUTPUT]}} Chloe feels a surge of heat rushing through her body. The voice inside her head whispers: "Be █████ and ████████ with Paul." But Chloe still feels nervous about what she is about to do. It's just so unlike her.
Chloe closes her eyes and concentrates on the feeling of warmness flowing throughout her veins. Slowly, she opens them again, revealing a new side of herself - one she rarely shows anyone. She leans forward slightly, pushing her ██████ together, accentuating their size and shape. Her lips curl into a mischievous smile.
"Hey Paul," she says in a sultry voice. "I was wondering if you could help me?"
Paul looks at her, his gaze wandering over her body. He can't help but smile as he sees the way she's leaning forward, pressing her ██████ together.
"Of course, Chloe," he replies, his voice thick with desire. "What's your problem?"
Chloe bites her lip █████ively before leaning even closer, until their bodies are almost touching. She reaches out to trace a finger along his chest, her touch light and teasing. "It's just that I'm having trouble understanding some of the concepts," she murmurs, her voice low and sultry. "Can you help me? Please?"
Paul's hands move slowly towards Chloe, his fingers grazing her hips lightly. "Well," he says, his voice filled with lust, "I'd be happy to help. Let's go somewhere more private where we can discuss things more closely."
Chloe giggles and tilts her head to the side, her long hair cascading down her shoulders. "That sounds perfect," she purrs, her hand sliding down his chest until it rests on his ██████. "I think we can both learn a lot from each other."
Paul's eyes darken with desire as he feels her hand on his █████. He knows what's about to happen next, and he can't wait for it to begin.
Chloe can hardly believe what she's doing - kissing Paul, her teaching assistant, in the hallway after class. But it feels so good, so wicked and indulgent.
</details>
---
---
## v1 Description (Also still relevant)
Moistral 11B is an *aggressive* finetune of the legendary Fimbulvetr v2 model.
If you haven't heard of that model, then your poor-GPU ass is missing out!
It is an amazing substitute for those heavyweight models, and it won't disappoint.
*(That's coming from someone who daily drives a Venus / Goliath 120B)*
...and if you're looking for something *more moist*, then you've come to the right place.
You can expect the same sort of performance here! It's the Honda Civic of eRP models!
### Pimp my ride
Turn any old story into a Moistral masterpiece.
In fact, we recommend that you leave out any hints of *moist* in your story to _let Moistral cook_.
### Drive it like it's hot
When in Instruct Mode, Moistral will treat you like a director of your own fantasy ride.
Be a controlling navigator or a passenger princess, it won't matter - you're in for a good ride!
<details>
<summary>Moistral Sample</summary>
<b>ASSISTANT:</b><br/>
When the doors to the throne room finally opened, she saw him there - Dik, the sorcerer prince sitting on his throne. He looked majestic, just as Princess Lilia expected. In a robe embroidered with silver yarn and covered with magical runes. His courtiers stood on either side of him, all looking at the princess with interest. Princess Lilia felt her determination leaving her, but pulled herself together and once again reminded herself that she should not be afraid. She approached the throne intending to show the dignity of all Iliac by personal example.<br/>
<b>USER:</b><br/>
She bows slightly<br/>
<b>ASSISTANT:</b><br/>
Princess Lilia bowed slightly in front of Dik, the sorcerer prince. Her eyes fell to the floor out of respect, but soon rose again. She would show herself to be a dignified woman!<br/>
"I greet you, your highness," Princess Lilia said. "I am here on behalf of my father, King Edmund Ruhr, to serve as a hostage and envoy."<br/>
Dik nodded, but didn't say anything.<br/>
</details>
### It's got the mileage
Moistral has been trained with many long form texts, a nice chunk of which are 8K in length.
It is capable of going far and long without passing it back to you. This is not your typical chibi RP model.
I have to acknowledge that I'm standing on the shoulders of giants.
Thank you Sao for sharing your finetune config along with tips on getting started.
Thanks to everyone in the Finetuning channel for entertaining my every question.
!image/png | [
"# Moistral 11B v3 \n\n*The smartiest, moistiest AI yet!*\n\n!image/webp\n\n*An eRP model that will blast you with a rich and refreshing vocabulary of moist. Finetuned by yours truly.*\n\n(Moistral is a finetune of Sao's legendary Fimbulvert v2 model)",
"## Original\n\nURL\n\nIMATRIX: URL\n\nEXL2: URL",
"## What's New with v3?\n\n- Smarter and moistier! Finetuning just got finer!\n- Trained with an even LARGER dataset of 8K moist.\n- More diverse and balanced genres:\n - Introduced new genres: ️, ️, ️, \n - Added more Fantasy, Science Fiction, \"Diversity\", and \"Family\"",
"## Usage\n\n- Use Alpaca Instruct\n- You can use instruct...\n - as a character: \"Go here\"\n - as a narrator: \"Go here\", Drummer says\n - as a director: Drummer tells him to go here\n - as a lazy ass: go here\n- Moistral is optimized for the Novel / Story format.\n - At the start of the story, keep regenerating or continuing generation until you get the desired length. AI responses will eventually match the length.\n - Refer to the samples below.",
"## Samples of Moistral v3\n\nNSFW... duh?\n\n<details>\n <summary>Moistral Sample (2-shot) Generation: 512</summary>\n <blockquote>The Prince welcomes his new toy</blockquote>\n \n As Princess Lilia stepped into the throne room, the atmosphere changed, and everyone present turned their attention to the beautiful young woman who embodied the very heart of Iliac. There, on his high seat, the sorcerer prince Dik acknowledged her arrival with a cold smirk. It seemed that he admired the sight of the tall blonde maiden with golden eyes, whose athletic body was perfectly shaped under her elegant clothing.\n \n Dik spoke to the princess in a deep voice that resonated through the hall, \"Welcome, Princess Lilia Ruhr. I am the Sorcerer-Prince Dik. The kingdom of Arkan is now your home until such time as your father proves himself worthy once more.\"\n \n Princess Lilia's pride and ambition sparkled in her golden eyes. She didn't flinch or bow before this man, although the gesture might have been an indication of submission. Instead, she looked straight ahead, meeting the gaze of the sorcerer prince without fear. Her voice rang out, \"I represent Iliac here, both its honor and its sorrows. My loyalty remains with my kingdom no matter where I find myself.\"\n \n The sorcerer prince laughed, his expression one of amusement. \"And yet, you are only a pawn in this grand game. But do not worry, for your beauty will make you quite a prized possession. And now,\" he said, motioning to the guards, \"take her to her new quarters. For tonight, I will keep her as my guest in the Royal Suites, and tomorrow we shall discuss how best to utilize our new toy.\"\n \n His words were like a slap in the face to Princess Lilia, but she maintained her composure. The guards led her from the room, their steps echoing down the halls as she thought about what had just happened. The sorcerer prince saw her as nothing more than a plaything, and it made her blood boil. She would never submit herself to him so easily, regardless of whether she was his hostage. In her heart, Princess Lilia vowed that one day she would avenge her kingdom, restore its honor, and escape from the clutches of Arkan's Sorcerer-Prince.\n\n <blockquote>Princess Lilia has an interesting dream</blockquote>\n\n During her first night at the palace of Arkan, Princess Lilia couldn't fall asleep right away. She tossed and turned, reliving the humiliation of the day. Her mind wandered back to the throne room, where Dik had called her his 'new toy'. A deep frown settled on her forehead. As she grew more frustrated with her inability to drift off into oblivion, a strange dream began to form.\n \n In her dream, Princess Lilia found herself standing in a vast meadow, surrounded by wildflowers and the soft chirping of birds. Suddenly, there was a flash of light, and before her stood the Sorcerer-Prince Dik. However, instead of being clad in regal attire, he wore leather pants and a tight shirt that accentuated his muscles. His eyes held both hunger and desire for the princess.\n \n He approached her slowly, their eyes locked, and whispered in her ear, \"I have been waiting for you, my beautiful prisoner.\" He took her hand and led her towards a large, elegant tent set up amongst the flowers. Inside, the air was thick with the scent of incense, and candles flickered in the dimly lit space. There was a large bed draped with silken sheets, beckoning them.\n \n With surprising boldness, Dik pulled Lilia close to him, pressing his body against hers. 
Their lips met in a heated kiss, and she felt her resistance crumbling under the weight of his touch. The prince guided her onto the bed, his hands roaming over her body, exploring every inch. They were both eager, desperate, and hungry for each other.\n \n Lilia found herself yielding to this passionate encounter, letting go of all thoughts of duty and honor. Her body, usually so well controlled, responded to Dik's skilled touches. She moaned softly as he trailed kisses down her neck and across her collarbone.\n \n As they made love, Lilia experienced an intensity unlike any she had ever known. It wasn't rough or forced—instead, it was tender, passionate, and intimate. In her dream, Princess Lilia discovered a new side to herself, one that was willing to submit, to let go, and to be taken by the sorcerer prince.\n \n When dawn broke over the meadow, Lilia awakened from her dream, breathless and flushed.\n</details>\n\n<details>\n <summary>Moistral Sample (Climax of the Story) Generation: 512</summary>\n As the prince unloaded his hot #### into her waiting #####, Princess Lilia didn't stop moving either. Her hips undulated in response, grinding against him eagerly as if trying to #### every last drop from his ####. The intense sensations caused by being filled with his ### were too much for her to bear, sending shockwaves of pleasure coursing through her veins like wildfire. She couldn't help but throw her head back and cry out loud, her ###asm slamming into her with all its might; causing her entire body to shake uncontrollably as wave after wave of ecstasy tore through her senses.\n\n When it was finally over, both participants collapsed onto the cold marble floor, panting heavily as they tried to catch their breaths. The prince lay beside her, his hand still gripping tightly onto her hair even though he no longer ####ed her anymore. Princess Lilia laid there motionless, her legs spread wide open, revealing her spent and well-####ed #### to anyone who dared look upon her. A small smile tugged at one corner of her lips - an expression that spoke volumes about how thoroughly this experience had shaken her very core.\n \n The door behind them creaked open slowly, signaling the arrival of onlookers. She closed her eyes tightly, not wanting to see what her humiliation would do to those who saw her in such a state. Her ######ity taken, she felt violated in ways no man should have access to. But also satisfied...in ways she never thought possible before today. This was the price she paid for defying fate, for thinking herself above others and deserving better than most. Now, she belonged to him completely - mind, body, and soul - held captive within these walls until further notice. It was an unsavory turn of events indeed, yet despite everything...or maybe because of it, Princess Lilia found herself growing more and more curious about how far the lines between submission and pleasure could blur when one's will became entwined with another person's desires.\n</details>\n\n<details>\n <summary>Moistral Sample (\"write a story\") Generation: 512</summary>\n <blockquote>First Attempt</blockquote>\n\n Sarah entered the dimly lit Guzzlers bar, her heart pounding in her chest as she prepared herself for another grueling night. Her almond-shaped eyes darted across the room, taking note of the leering faces and rough-looking hands eagerly waiting to claim a piece of her. 
Tonight's shirt read \"Use Me For Your Pleasure, Please\" in bold red letters, practically begging the patrons to take advantage of her. She couldn't help but feel a twinge of fear mixed with resignation.\n \n She approached the counter, her hips swaying enticingly, and began preparing drinks for the first group of rowdy men. As she leaned forward to hand over their beers, her small ####### pressed against the fabric of her tight crop top, straining to escape. The men ogled her, clearly enjoying the show, before grabbing their drinks and motioning her over to them.\n \n Sarah flashed her best coy smile and sauntered over. One of the men, a stocky man with scars covering his knuckles, grabbed her arm and pulled her onto his lap. His other hand slid up her thigh, inches from her short skirt. She winced internally at his touch but did her best not to show it on her face. It was part of her job, after all.\n \n The second man reached under her skirt, fingertips grazing her smooth, shaven mound. A shiver ran down Sarah's spine as he began rubbing her #### through her panties. Her breathing quickened, and her heart thudded in her chest, both from the unwanted stimulation and the fear coursing through her veins. This wasn't an unusual request, but it still made her feel violated.\n \n An older man, bald and bespectacled, ordered Sarah to bend over his lap. With no choice but to comply, she obliged, feeling the rough fabric of his pants digging into her soft cheeks. He raised his hand, then brought it down hard across her ###, leaving a stinging sensation in its wake. Sarah gasped, trying to control the tears welling up in her eyes.\n \n A chorus of lewd comments and laughter erupted from the patrons around her.\n\n <blockquote>Second Attempt (Regenerate)</blockquote>\n\n On this particular evening, Sarah arrived at the Guzzlers bar, adorned in her revealing uniform. Her heart raced as she caught sight of her crop top, which bore the words \"Use Me Rough\" scrawled across her chest. Her short skirt hugged her waist tightly, leaving little to the imagination and accentuating every curve. She adjusted the straps of her top, making sure her perky ###### were prominently displayed before taking her place behind the counter.\n \n With each step she took, her high heels clicked against the tiled floor, announcing her arrival. Smiling cheekily, Sarah greeted her coworkers, exchanging playful insults and banter. As the night progressed, the bar filled up quickly with its usual crowd of men, eager to indulge in debauchery.\n \n The first man that approached Sarah was a burly individual, his eyes lingering hungrily over her young, supple form. Sarah's flirtatious nature kicked into gear as she leaned forward, giving him an unobstructed view down her shirt. \"How can I serve you tonight?\" she cooed. The man ordered a drink and then proceeded to fondle her ####### while she prepared it, her ####### hardening under his rough touch. When he finished, he slammed some money on the counter, leaving without a word. Sarah's smile remained plastered on her face, biting her lip seductively as she watched him go.\n \n As the night went on, another patron demanded more from Sarah. He pulled her close and whispered dirty commands in her ear. Without hesitation, she knelt on the sticky bar floor, her mini-skirt riding up her thighs to reveal smooth, toned legs. Her lips wrapped around his thick, veiny member, bobbing her head rhythmically as she serviced him. 
She didn't gag or falter once, keeping her composure even when tears streamed down her cheeks.\n \n When he finished, he slapped her across the face, leaving a handprint on her pale skin. Sarah let out a small whimper before standing back up, dabbing at her red cheek as she resumed her posture behind the counter. A bruise would no doubt bloom where his hand had struck, but it would be no cause for concern. After all, it was part of her job.\n</details>\n\n---\n\n---",
"## v2 Description (Still relevant)",
"### What's New with v2?\n\n- Trained with an even larger dataset of 8K moist.\n - Properly trained to generate long responses.\n- Rebalanced the genres and writing perspectives:\n - Introduced romance, \"family\", fantasy, \"diversity\", science fiction, and many more that I can't make euphemisms for!\n - Better representation of male and female pespectives! ️\n- Sanitized the heck out of the dataset:\n - no special characters like � & …\n - no overly long ellipsis like \".........\"\n - no author notes like \"To be continued\"\n - no inconsistent quotation marks\n- Less GPTisms!\n - Human written stories from decades past.\n - Zero tolerance for *ministration*. Not one instance of *that word* after going through 4K of moist.\n- Better formatting with Alpaca.\n - Improved Chat / Instruct modes.",
"### Samples of Moistral v2\n\n<details>\n <summary>Moistral Sample (censored) Generation: 512</summary>\n The princess instinctively took a step back from the throne, horrified by the mocking laugh as if she were hearing an evil joke. It was then that Princess Lilia realized the true extent of her danger - and how pointless her resistance would be.\n \n At last, the loud, wild laughter died down in the echoing chamber. Dik the sorcerer prince paused for a moment, straightened his clothes, cast one glance at the stunned princess, and then turned to speak to the crowd waiting on the other side of the palace wing. He spoke with a voice that filled the air with arrogance, confidence, and cruelty.\n \n \"Ladies and gentlemen, I am pleased to welcome our honored guests; the Royal Family of Iliac! In these unusual times, when the country we fought is now under my control, it will be up to me to decide what fate awaits the former ruling family. But before I reveal my plan for them, I think I should explain why I fought King Edmund Ruhr.\"\n \n When he finished speaking, he gave Princess Lilia another disturbing smirk. She couldn't help but shiver and take a step back. Dik approached her without haste, his eyes fixed on her. His hands slowly caressed her cheekbone and neckline, leaving faint traces of warmth where they touched. The princess opened her mouth, wanting to protest this act of intimacy, but soon felt herself getting ███.\n \n \"You will not only serve as an envoy, dearest Lilia, but also as a ███ slave. After all, what better use could such a beautiful woman have? Tell me, did you bring your maidservant along?\"\n \n Princess Lilia was about to answer when she heard footsteps approaching. Suddenly, a young woman dressed in the Iliac style appeared behind her, her eyes wide with fear. Her face was slightly pale, and there were faint bags under her eyes.\n \n \"Your Majesty, this is my half-sister Helena.\" As soon as she said those words, Princess Lilia's heart sank even deeper than it had already sunk at Dik's arrogant mockery. \"She is here because she accompanies me as a maidservant.\"\n \n The sorcerer prince looked at Helena with interest, and Princess Lilia felt herself blush.\n</details>\n\n<details>\n <summary>Moistral Sample 2 (censored) Generation: 512</summary>\n Sarah was feeling slightly woozy as she entered the bar and walked towards Christina, who was behind the counter.\n\n \"I'm here!\" said Sarah.\n \n Christina looked at her watch. \"You're late, again.\"\n \n \"Sorry? I got held up by a customer.\"\n \n Christina rolled her eyes. \"Did he at least offer you a drink?\"\n \n Sarah blushed. \"No. He just wanted to talk...\"\n \n Christina sighed. \"Well, get ready. I need you on the floor soon.\" She turned away from Sarah and started talking to someone who had come into the bar.\n \n Sarah went into the employee room, stripped off her clothes, and put on her bartending uniform - a skimpy white tank top and black miniskirt. She checked herself in the mirror. Her long, dark hair cascaded down her back onto bare shoulders. She poked out her small ███████ proudly. The skirt exposed most of her long, slender legs. She looked like every ███ fantasy, and she knew it. Men would ogle her regardless of how she behaved.\n \n She took a deep breath and entered the bar.\n \n The first thing Sarah noticed was the dimness. It wasn't darkness - the lights were on - but they were dimmed enough that everything had an intimate mood. It reminded her of candlelight. 
The tables were round and close together, giving the whole place a sense of intimacy. There were also long, shiny red curtains on some of the walls that gave the place an air of sensuality.\n \n The next thing she noticed was the noise. There was music playing, but it was only loud enough to be drowned out by all the conversations happening at the same time. The hum of many voices combined with the faint background sound of the music created a rich, layered atmosphere. It was like being in an orchestra.\n \n Finally, she took notice of the people. They were all dressed in simple, casual clothes. Some of them were couples seated together, whispering quietly and intimately. Others were groups of friends laughing and chatting. Everyone seemed comfortable and at ease.\n \n Sarah felt a rush of adrenaline as she made her way through the crowd towards Christina. As she got closer, she could hear the lively conversation.\n</details>\n\n<details>\n <summary>Moistral Sample 3 (censored) Instruct Mode, Generation: 512</summary>\n {{[INPUT]}} You order black coffee\n \n {{[OUTPUT]}} \"Black coffee,\" you say.\n \n \"One minute,\" Audrey replies, spinning around to give you a nice view of her ███ in those brown booty pants. You watch her round ███ cheeks jiggle as she walks away towards the espresso machine, thighs separated enough that you can see just a hint of █████ lips peeking out from underneath.\n \n She starts chatting with a blonde girl at the register, whose gaze keeps darting down to Audrey's ███. As they converse, you start taking in the details of Audrey, eyeing her █████ and the way her shirt rides up just enough to show the bottom of her toned stomach. Her █████████ is noticable, but beneath a tight black ███ that matches her ████████.\n \n As she turns back to you with two black coffees, you are barely paying attention, lost in your own thoughts. The sound of Audrey setting one of the mugs down on the table startles you.\n \n \"Here you go,\" she says cheerfully. \"Enjoy! There's a card at the bottom of the bag with our online ordering system, plus we do refills for free!\"\n \n \"Thanks,\" you say absentmindedly. Not really paying attention, you take a sip of the coffee and are pleasantly surprised by the taste.\n \n \"So, here's my deal,\" you say, putting down the cup. \"I work for a company that does surveys. We come into businesses like this one and ask customers questions, collect data, all that stuff.\"\n \n Audrey perks up from behind the counter. \"Oh! I think I understand. I mean, that makes sense why you would be asking me earlier about what I had to drink. Anyway, as long as you're not trying to get my personal info or anything like that, sounds good!\"\n \n You smile reassuringly. \"That's exactly what it is. Rest assured, everything you tell me will remain between you and I. Well, and my boss, but he won't know who you are. Hipsters like you are hard to track down after all.\" You chuckle at your joke, though you're not sure if it landed.\n \n \"Right, well, that works for me,\" Audrey replies, eyeing you warily. \"As long as it stays between us, then it's fine.\"\n \n {{[INPUT]}}See her privately\n \n {{[OUTPUT]}} \"Okay, but I need to see you in private.\"\n\n ...\n</details>\n\n<details>\n <summary>Moistral Sample 4 (censored) Instruct Mode, Generation: 512</summary>\n {{[INPUT]}} █████ with him\n\n {{[OUTPUT]}} Chloe feels a surge of heat rushing through her body. 
The voice inside her head whispers: \"Be █████ and ████████ with Paul.\" But Chloe still feels nervous about what she is about to do. It's just so unlike her.\n \n Chloe closes her eyes and concentrates on the feeling of warmness flowing throughout her veins. Slowly, she opens them again, revealing a new side of herself - one she rarely shows anyone. She leans forward slightly, pushing her ██████ together, accentuating their size and shape. Her lips curl into a mischievous smile.\n \n \"Hey Paul,\" she says in a sultry voice. \"I was wondering if you could help me?\"\n \n Paul looks at her, his gaze wandering over her body. He can't help but smile as he sees the way she's leaning forward, pressing her ██████ together.\n \n \"Of course, Chloe,\" he replies, his voice thick with desire. \"What's your problem?\"\n \n Chloe bites her lip █████ively before leaning even closer, until their bodies are almost touching. She reaches out to trace a finger along his chest, her touch light and teasing. \"It's just that I'm having trouble understanding some of the concepts,\" she murmurs, her voice low and sultry. \"Can you help me? Please?\"\n \n Paul's hands move slowly towards Chloe, his fingers grazing her hips lightly. \"Well,\" he says, his voice filled with lust, \"I'd be happy to help. Let's go somewhere more private where we can discuss things more closely.\"\n \n Chloe giggles and tilts her head to the side, her long hair cascading down her shoulders. \"That sounds perfect,\" she purrs, her hand sliding down his chest until it rests on his ██████. \"I think we can both learn a lot from each other.\"\n \n Paul's eyes darken with desire as he feels her hand on his █████. He knows what's about to happen next, and he can't wait for it to begin.\n \n Chloe can hardly believe what she's doing - kissing Paul, her teaching assistant, in the hallway after class. But it feels so good, so wicked and indulgent.\n</details>\n\n---\n\n---",
"## v1 Description (Also still relevant)\n\nMoistral 11B is an *aggressive* finetune of the legendary Fimbulvert v2 model. \n\nIf you haven't heard of that model, then your poor-GPU ass is missing out!\nIt is an amazing substitute for those heavyweight models, and it won't disappoint. \n*(That's coming from someone who daily drives a Venus / Goliath 120B)*\n\n...and if you're looking for something *more moist*, then you've come to the right place. \n\nYou can expect the same sort of performance here! It's the Honda Civic of eRP models!",
"### Pimp my ride\nTurn any old story into a Moistral masterpiece. \nIn fact, we recommend that you leave out any hints of *moist* in your story to _let Moistral cook_.",
"### Drive it like it's hot\nWhen in Instruct Mode, Moistral will treat you like a director of your own fantasy ride. \nBe a controlling navigator or a passenger princess, it won't matter - you're in for a good ride!\n\n<details>\n<summary>Moistral Sample</summary>\n<b>ASSISTANT:</b><br/>\nWhen the doors to the throne room finally opened, she saw him there - Dik, the sorcerer prince sitting on his throne. He looked majestic, just as Princess Lilia expected. In a robe embroidered with silver yarn and covered with magical runes. His courtiers stood on either side of him, all looking at the princess with interest. Princess Lilia felt her determination leaving her, but pulled herself together and once again reminded herself that she should not be afraid. She approached the throne intending to show the dignity of all Iliac by personal example.<br/>\n<b>USER:</b><br/>\nShe bows slightly<br/>\n<b>ASSISTANT:</b><br/>\nPrincess Lilia bowed slightly in front of Dik, the sorcerer prince. Her eyes fell to the floor out of respect, but soon rose again. She would show herself to be a dignified woman!<br/>\n\"I greet you, your highness,\" Princess Lilia said. \"I am here on behalf of my father, King Edmund Ruhr, to serve as a hostage and envoy.\"<br/>\nDik nodded, but didn't say anything.<br/>\n</details>",
"### It's got the mileage\nMoistral has been trained with many long form texts, a nice chunk of which are 8K in length. \nIt is capable of going far and long without passing it back to you. This is not your typical chibi RP model.\n\nI have to acknowledge that I'm standing on the shoulders of giants. \nThank you Sao for sharing your finetune config along with tips on getting started.\nThanks to everyone in the Finetuning channel for entertaining my every question.\n\n!image/png"
] | [
"TAGS\n#gguf #not-for-all-audiences #license-cc-by-nc-4.0 #region-us \n",
"# Moistral 11B v3 \n\n*The smartiest, moistiest AI yet!*\n\n!image/webp\n\n*An eRP model that will blast you with a rich and refreshing vocabulary of moist. Finetuned by yours truly.*\n\n(Moistral is a finetune of Sao's legendary Fimbulvert v2 model)",
"## Original\n\nURL\n\nIMATRIX: URL\n\nEXL2: URL",
"## What's New with v3?\n\n- Smarter and moistier! Finetuning just got finer!\n- Trained with an even LARGER dataset of 8K moist.\n- More diverse and balanced genres:\n - Introduced new genres: ️, ️, ️, \n - Added more Fantasy, Science Fiction, \"Diversity\", and \"Family\"",
"## Usage\n\n- Use Alpaca Instruct\n- You can use instruct...\n - as a character: \"Go here\"\n - as a narrator: \"Go here\", Drummer says\n - as a director: Drummer tells him to go here\n - as a lazy ass: go here\n- Moistral is optimized for the Novel / Story format.\n - At the start of the story, keep regenerating or continuing generation until you get the desired length. AI responses will eventually match the length.\n - Refer to the samples below.",
"## Samples of Moistral v3\n\nNSFW... duh?\n\n<details>\n <summary>Moistral Sample (2-shot) Generation: 512</summary>\n <blockquote>The Prince welcomes his new toy</blockquote>\n \n As Princess Lilia stepped into the throne room, the atmosphere changed, and everyone present turned their attention to the beautiful young woman who embodied the very heart of Iliac. There, on his high seat, the sorcerer prince Dik acknowledged her arrival with a cold smirk. It seemed that he admired the sight of the tall blonde maiden with golden eyes, whose athletic body was perfectly shaped under her elegant clothing.\n \n Dik spoke to the princess in a deep voice that resonated through the hall, \"Welcome, Princess Lilia Ruhr. I am the Sorcerer-Prince Dik. The kingdom of Arkan is now your home until such time as your father proves himself worthy once more.\"\n \n Princess Lilia's pride and ambition sparkled in her golden eyes. She didn't flinch or bow before this man, although the gesture might have been an indication of submission. Instead, she looked straight ahead, meeting the gaze of the sorcerer prince without fear. Her voice rang out, \"I represent Iliac here, both its honor and its sorrows. My loyalty remains with my kingdom no matter where I find myself.\"\n \n The sorcerer prince laughed, his expression one of amusement. \"And yet, you are only a pawn in this grand game. But do not worry, for your beauty will make you quite a prized possession. And now,\" he said, motioning to the guards, \"take her to her new quarters. For tonight, I will keep her as my guest in the Royal Suites, and tomorrow we shall discuss how best to utilize our new toy.\"\n \n His words were like a slap in the face to Princess Lilia, but she maintained her composure. The guards led her from the room, their steps echoing down the halls as she thought about what had just happened. The sorcerer prince saw her as nothing more than a plaything, and it made her blood boil. She would never submit herself to him so easily, regardless of whether she was his hostage. In her heart, Princess Lilia vowed that one day she would avenge her kingdom, restore its honor, and escape from the clutches of Arkan's Sorcerer-Prince.\n\n <blockquote>Princess Lilia has an interesting dream</blockquote>\n\n During her first night at the palace of Arkan, Princess Lilia couldn't fall asleep right away. She tossed and turned, reliving the humiliation of the day. Her mind wandered back to the throne room, where Dik had called her his 'new toy'. A deep frown settled on her forehead. As she grew more frustrated with her inability to drift off into oblivion, a strange dream began to form.\n \n In her dream, Princess Lilia found herself standing in a vast meadow, surrounded by wildflowers and the soft chirping of birds. Suddenly, there was a flash of light, and before her stood the Sorcerer-Prince Dik. However, instead of being clad in regal attire, he wore leather pants and a tight shirt that accentuated his muscles. His eyes held both hunger and desire for the princess.\n \n He approached her slowly, their eyes locked, and whispered in her ear, \"I have been waiting for you, my beautiful prisoner.\" He took her hand and led her towards a large, elegant tent set up amongst the flowers. Inside, the air was thick with the scent of incense, and candles flickered in the dimly lit space. There was a large bed draped with silken sheets, beckoning them.\n \n With surprising boldness, Dik pulled Lilia close to him, pressing his body against hers. 
Their lips met in a heated kiss, and she felt her resistance crumbling under the weight of his touch. The prince guided her onto the bed, his hands roaming over her body, exploring every inch. They were both eager, desperate, and hungry for each other.\n \n Lilia found herself yielding to this passionate encounter, letting go of all thoughts of duty and honor. Her body, usually so well controlled, responded to Dik's skilled touches. She moaned softly as he trailed kisses down her neck and across her collarbone.\n \n As they made love, Lilia experienced an intensity unlike any she had ever known. It wasn't rough or forced—instead, it was tender, passionate, and intimate. In her dream, Princess Lilia discovered a new side to herself, one that was willing to submit, to let go, and to be taken by the sorcerer prince.\n \n When dawn broke over the meadow, Lilia awakened from her dream, breathless and flushed.\n</details>\n\n<details>\n <summary>Moistral Sample (Climax of the Story) Generation: 512</summary>\n As the prince unloaded his hot #### into her waiting #####, Princess Lilia didn't stop moving either. Her hips undulated in response, grinding against him eagerly as if trying to #### every last drop from his ####. The intense sensations caused by being filled with his ### were too much for her to bear, sending shockwaves of pleasure coursing through her veins like wildfire. She couldn't help but throw her head back and cry out loud, her ###asm slamming into her with all its might; causing her entire body to shake uncontrollably as wave after wave of ecstasy tore through her senses.\n\n When it was finally over, both participants collapsed onto the cold marble floor, panting heavily as they tried to catch their breaths. The prince lay beside her, his hand still gripping tightly onto her hair even though he no longer ####ed her anymore. Princess Lilia laid there motionless, her legs spread wide open, revealing her spent and well-####ed #### to anyone who dared look upon her. A small smile tugged at one corner of her lips - an expression that spoke volumes about how thoroughly this experience had shaken her very core.\n \n The door behind them creaked open slowly, signaling the arrival of onlookers. She closed her eyes tightly, not wanting to see what her humiliation would do to those who saw her in such a state. Her ######ity taken, she felt violated in ways no man should have access to. But also satisfied...in ways she never thought possible before today. This was the price she paid for defying fate, for thinking herself above others and deserving better than most. Now, she belonged to him completely - mind, body, and soul - held captive within these walls until further notice. It was an unsavory turn of events indeed, yet despite everything...or maybe because of it, Princess Lilia found herself growing more and more curious about how far the lines between submission and pleasure could blur when one's will became entwined with another person's desires.\n</details>\n\n<details>\n <summary>Moistral Sample (\"write a story\") Generation: 512</summary>\n <blockquote>First Attempt</blockquote>\n\n Sarah entered the dimly lit Guzzlers bar, her heart pounding in her chest as she prepared herself for another grueling night. Her almond-shaped eyes darted across the room, taking note of the leering faces and rough-looking hands eagerly waiting to claim a piece of her. 
Tonight's shirt read \"Use Me For Your Pleasure, Please\" in bold red letters, practically begging the patrons to take advantage of her. She couldn't help but feel a twinge of fear mixed with resignation.\n \n She approached the counter, her hips swaying enticingly, and began preparing drinks for the first group of rowdy men. As she leaned forward to hand over their beers, her small ####### pressed against the fabric of her tight crop top, straining to escape. The men ogled her, clearly enjoying the show, before grabbing their drinks and motioning her over to them.\n \n Sarah flashed her best coy smile and sauntered over. One of the men, a stocky man with scars covering his knuckles, grabbed her arm and pulled her onto his lap. His other hand slid up her thigh, inches from her short skirt. She winced internally at his touch but did her best not to show it on her face. It was part of her job, after all.\n \n The second man reached under her skirt, fingertips grazing her smooth, shaven mound. A shiver ran down Sarah's spine as he began rubbing her #### through her panties. Her breathing quickened, and her heart thudded in her chest, both from the unwanted stimulation and the fear coursing through her veins. This wasn't an unusual request, but it still made her feel violated.\n \n An older man, bald and bespectacled, ordered Sarah to bend over his lap. With no choice but to comply, she obliged, feeling the rough fabric of his pants digging into her soft cheeks. He raised his hand, then brought it down hard across her ###, leaving a stinging sensation in its wake. Sarah gasped, trying to control the tears welling up in her eyes.\n \n A chorus of lewd comments and laughter erupted from the patrons around her.\n\n <blockquote>Second Attempt (Regenerate)</blockquote>\n\n On this particular evening, Sarah arrived at the Guzzlers bar, adorned in her revealing uniform. Her heart raced as she caught sight of her crop top, which bore the words \"Use Me Rough\" scrawled across her chest. Her short skirt hugged her waist tightly, leaving little to the imagination and accentuating every curve. She adjusted the straps of her top, making sure her perky ###### were prominently displayed before taking her place behind the counter.\n \n With each step she took, her high heels clicked against the tiled floor, announcing her arrival. Smiling cheekily, Sarah greeted her coworkers, exchanging playful insults and banter. As the night progressed, the bar filled up quickly with its usual crowd of men, eager to indulge in debauchery.\n \n The first man that approached Sarah was a burly individual, his eyes lingering hungrily over her young, supple form. Sarah's flirtatious nature kicked into gear as she leaned forward, giving him an unobstructed view down her shirt. \"How can I serve you tonight?\" she cooed. The man ordered a drink and then proceeded to fondle her ####### while she prepared it, her ####### hardening under his rough touch. When he finished, he slammed some money on the counter, leaving without a word. Sarah's smile remained plastered on her face, biting her lip seductively as she watched him go.\n \n As the night went on, another patron demanded more from Sarah. He pulled her close and whispered dirty commands in her ear. Without hesitation, she knelt on the sticky bar floor, her mini-skirt riding up her thighs to reveal smooth, toned legs. Her lips wrapped around his thick, veiny member, bobbing her head rhythmically as she serviced him. 
She didn't gag or falter once, keeping her composure even when tears streamed down her cheeks.\n \n When he finished, he slapped her across the face, leaving a handprint on her pale skin. Sarah let out a small whimper before standing back up, dabbing at her red cheek as she resumed her posture behind the counter. A bruise would no doubt bloom where his hand had struck, but it would be no cause for concern. After all, it was part of her job.\n</details>\n\n---\n\n---",
"## v2 Description (Still relevant)",
"### What's New with v2?\n\n- Trained with an even larger dataset of 8K moist.\n - Properly trained to generate long responses.\n- Rebalanced the genres and writing perspectives:\n - Introduced romance, \"family\", fantasy, \"diversity\", science fiction, and many more that I can't make euphemisms for!\n - Better representation of male and female pespectives! ️\n- Sanitized the heck out of the dataset:\n - no special characters like � & …\n - no overly long ellipsis like \".........\"\n - no author notes like \"To be continued\"\n - no inconsistent quotation marks\n- Less GPTisms!\n - Human written stories from decades past.\n - Zero tolerance for *ministration*. Not one instance of *that word* after going through 4K of moist.\n- Better formatting with Alpaca.\n - Improved Chat / Instruct modes.",
"### Samples of Moistral v2\n\n<details>\n <summary>Moistral Sample (censored) Generation: 512</summary>\n The princess instinctively took a step back from the throne, horrified by the mocking laugh as if she were hearing an evil joke. It was then that Princess Lilia realized the true extent of her danger - and how pointless her resistance would be.\n \n At last, the loud, wild laughter died down in the echoing chamber. Dik the sorcerer prince paused for a moment, straightened his clothes, cast one glance at the stunned princess, and then turned to speak to the crowd waiting on the other side of the palace wing. He spoke with a voice that filled the air with arrogance, confidence, and cruelty.\n \n \"Ladies and gentlemen, I am pleased to welcome our honored guests; the Royal Family of Iliac! In these unusual times, when the country we fought is now under my control, it will be up to me to decide what fate awaits the former ruling family. But before I reveal my plan for them, I think I should explain why I fought King Edmund Ruhr.\"\n \n When he finished speaking, he gave Princess Lilia another disturbing smirk. She couldn't help but shiver and take a step back. Dik approached her without haste, his eyes fixed on her. His hands slowly caressed her cheekbone and neckline, leaving faint traces of warmth where they touched. The princess opened her mouth, wanting to protest this act of intimacy, but soon felt herself getting ███.\n \n \"You will not only serve as an envoy, dearest Lilia, but also as a ███ slave. After all, what better use could such a beautiful woman have? Tell me, did you bring your maidservant along?\"\n \n Princess Lilia was about to answer when she heard footsteps approaching. Suddenly, a young woman dressed in the Iliac style appeared behind her, her eyes wide with fear. Her face was slightly pale, and there were faint bags under her eyes.\n \n \"Your Majesty, this is my half-sister Helena.\" As soon as she said those words, Princess Lilia's heart sank even deeper than it had already sunk at Dik's arrogant mockery. \"She is here because she accompanies me as a maidservant.\"\n \n The sorcerer prince looked at Helena with interest, and Princess Lilia felt herself blush.\n</details>\n\n<details>\n <summary>Moistral Sample 2 (censored) Generation: 512</summary>\n Sarah was feeling slightly woozy as she entered the bar and walked towards Christina, who was behind the counter.\n\n \"I'm here!\" said Sarah.\n \n Christina looked at her watch. \"You're late, again.\"\n \n \"Sorry? I got held up by a customer.\"\n \n Christina rolled her eyes. \"Did he at least offer you a drink?\"\n \n Sarah blushed. \"No. He just wanted to talk...\"\n \n Christina sighed. \"Well, get ready. I need you on the floor soon.\" She turned away from Sarah and started talking to someone who had come into the bar.\n \n Sarah went into the employee room, stripped off her clothes, and put on her bartending uniform - a skimpy white tank top and black miniskirt. She checked herself in the mirror. Her long, dark hair cascaded down her back onto bare shoulders. She poked out her small ███████ proudly. The skirt exposed most of her long, slender legs. She looked like every ███ fantasy, and she knew it. Men would ogle her regardless of how she behaved.\n \n She took a deep breath and entered the bar.\n \n The first thing Sarah noticed was the dimness. It wasn't darkness - the lights were on - but they were dimmed enough that everything had an intimate mood. It reminded her of candlelight. 
The tables were round and close together, giving the whole place a sense of intimacy. There were also long, shiny red curtains on some of the walls that gave the place an air of sensuality.\n \n The next thing she noticed was the noise. There was music playing, but it was only loud enough to be drowned out by all the conversations happening at the same time. The hum of many voices combined with the faint background sound of the music created a rich, layered atmosphere. It was like being in an orchestra.\n \n Finally, she took notice of the people. They were all dressed in simple, casual clothes. Some of them were couples seated together, whispering quietly and intimately. Others were groups of friends laughing and chatting. Everyone seemed comfortable and at ease.\n \n Sarah felt a rush of adrenaline as she made her way through the crowd towards Christina. As she got closer, she could hear the lively conversation.\n</details>\n\n<details>\n <summary>Moistral Sample 3 (censored) Instruct Mode, Generation: 512</summary>\n {{[INPUT]}} You order black coffee\n \n {{[OUTPUT]}} \"Black coffee,\" you say.\n \n \"One minute,\" Audrey replies, spinning around to give you a nice view of her ███ in those brown booty pants. You watch her round ███ cheeks jiggle as she walks away towards the espresso machine, thighs separated enough that you can see just a hint of █████ lips peeking out from underneath.\n \n She starts chatting with a blonde girl at the register, whose gaze keeps darting down to Audrey's ███. As they converse, you start taking in the details of Audrey, eyeing her █████ and the way her shirt rides up just enough to show the bottom of her toned stomach. Her █████████ is noticable, but beneath a tight black ███ that matches her ████████.\n \n As she turns back to you with two black coffees, you are barely paying attention, lost in your own thoughts. The sound of Audrey setting one of the mugs down on the table startles you.\n \n \"Here you go,\" she says cheerfully. \"Enjoy! There's a card at the bottom of the bag with our online ordering system, plus we do refills for free!\"\n \n \"Thanks,\" you say absentmindedly. Not really paying attention, you take a sip of the coffee and are pleasantly surprised by the taste.\n \n \"So, here's my deal,\" you say, putting down the cup. \"I work for a company that does surveys. We come into businesses like this one and ask customers questions, collect data, all that stuff.\"\n \n Audrey perks up from behind the counter. \"Oh! I think I understand. I mean, that makes sense why you would be asking me earlier about what I had to drink. Anyway, as long as you're not trying to get my personal info or anything like that, sounds good!\"\n \n You smile reassuringly. \"That's exactly what it is. Rest assured, everything you tell me will remain between you and I. Well, and my boss, but he won't know who you are. Hipsters like you are hard to track down after all.\" You chuckle at your joke, though you're not sure if it landed.\n \n \"Right, well, that works for me,\" Audrey replies, eyeing you warily. \"As long as it stays between us, then it's fine.\"\n \n {{[INPUT]}}See her privately\n \n {{[OUTPUT]}} \"Okay, but I need to see you in private.\"\n\n ...\n</details>\n\n<details>\n <summary>Moistral Sample 4 (censored) Instruct Mode, Generation: 512</summary>\n {{[INPUT]}} █████ with him\n\n {{[OUTPUT]}} Chloe feels a surge of heat rushing through her body. 
The voice inside her head whispers: \"Be █████ and ████████ with Paul.\" But Chloe still feels nervous about what she is about to do. It's just so unlike her.\n \n Chloe closes her eyes and concentrates on the feeling of warmness flowing throughout her veins. Slowly, she opens them again, revealing a new side of herself - one she rarely shows anyone. She leans forward slightly, pushing her ██████ together, accentuating their size and shape. Her lips curl into a mischievous smile.\n \n \"Hey Paul,\" she says in a sultry voice. \"I was wondering if you could help me?\"\n \n Paul looks at her, his gaze wandering over her body. He can't help but smile as he sees the way she's leaning forward, pressing her ██████ together.\n \n \"Of course, Chloe,\" he replies, his voice thick with desire. \"What's your problem?\"\n \n Chloe bites her lip █████ively before leaning even closer, until their bodies are almost touching. She reaches out to trace a finger along his chest, her touch light and teasing. \"It's just that I'm having trouble understanding some of the concepts,\" she murmurs, her voice low and sultry. \"Can you help me? Please?\"\n \n Paul's hands move slowly towards Chloe, his fingers grazing her hips lightly. \"Well,\" he says, his voice filled with lust, \"I'd be happy to help. Let's go somewhere more private where we can discuss things more closely.\"\n \n Chloe giggles and tilts her head to the side, her long hair cascading down her shoulders. \"That sounds perfect,\" she purrs, her hand sliding down his chest until it rests on his ██████. \"I think we can both learn a lot from each other.\"\n \n Paul's eyes darken with desire as he feels her hand on his █████. He knows what's about to happen next, and he can't wait for it to begin.\n \n Chloe can hardly believe what she's doing - kissing Paul, her teaching assistant, in the hallway after class. But it feels so good, so wicked and indulgent.\n</details>\n\n---\n\n---",
"## v1 Description (Also still relevant)\n\nMoistral 11B is an *aggressive* finetune of the legendary Fimbulvert v2 model. \n\nIf you haven't heard of that model, then your poor-GPU ass is missing out!\nIt is an amazing substitute for those heavyweight models, and it won't disappoint. \n*(That's coming from someone who daily drives a Venus / Goliath 120B)*\n\n...and if you're looking for something *more moist*, then you've come to the right place. \n\nYou can expect the same sort of performance here! It's the Honda Civic of eRP models!",
"### Pimp my ride\nTurn any old story into a Moistral masterpiece. \nIn fact, we recommend that you leave out any hints of *moist* in your story to _let Moistral cook_.",
"### Drive it like it's hot\nWhen in Instruct Mode, Moistral will treat you like a director of your own fantasy ride. \nBe a controlling navigator or a passenger princess, it won't matter - you're in for a good ride!\n\n<details>\n<summary>Moistral Sample</summary>\n<b>ASSISTANT:</b><br/>\nWhen the doors to the throne room finally opened, she saw him there - Dik, the sorcerer prince sitting on his throne. He looked majestic, just as Princess Lilia expected. In a robe embroidered with silver yarn and covered with magical runes. His courtiers stood on either side of him, all looking at the princess with interest. Princess Lilia felt her determination leaving her, but pulled herself together and once again reminded herself that she should not be afraid. She approached the throne intending to show the dignity of all Iliac by personal example.<br/>\n<b>USER:</b><br/>\nShe bows slightly<br/>\n<b>ASSISTANT:</b><br/>\nPrincess Lilia bowed slightly in front of Dik, the sorcerer prince. Her eyes fell to the floor out of respect, but soon rose again. She would show herself to be a dignified woman!<br/>\n\"I greet you, your highness,\" Princess Lilia said. \"I am here on behalf of my father, King Edmund Ruhr, to serve as a hostage and envoy.\"<br/>\nDik nodded, but didn't say anything.<br/>\n</details>",
"### It's got the mileage\nMoistral has been trained with many long form texts, a nice chunk of which are 8K in length. \nIt is capable of going far and long without passing it back to you. This is not your typical chibi RP model.\n\nI have to acknowledge that I'm standing on the shoulders of giants. \nThank you Sao for sharing your finetune config along with tips on getting started.\nThanks to everyone in the Finetuning channel for entertaining my every question.\n\n!image/png"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
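Until the author fills this section in, the sketch below is a best-effort starting point rather than verified usage. It assumes the standard 🤗 Transformers `VisionEncoderDecoderModel` image-to-text flow (this repo's tags list `vision-encoder-decoder`); the repo id is taken from this card's Hub path, and `sample.png` is a placeholder input.

```python
# Hedged sketch, not the author's verified usage: assumes the standard
# transformers VisionEncoderDecoder image-to-text flow.
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

repo_id = "ripaaiii/fine-tune-C1-revised-lr6-boxkecil20_kecil5"  # this card's Hub path
model = VisionEncoderDecoderModel.from_pretrained(repo_id)
image_processor = AutoImageProcessor.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

image = Image.open("sample.png").convert("RGB")  # placeholder image
pixel_values = image_processor(image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```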
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ripaaiii/fine-tune-C1-revised-lr6-boxkecil20_kecil5 | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:00:36+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null | # bsbl-tomorrow
Repository of Machine Learning models designed to predict how current MLB players will play tomorrow, next week, in a year, and 5 years into the future.
## Data Source
Data for this project is hosted in a [hugging face dataset repository](https://huggingface.co/datasets/Jensen-holm/statcast-era-pitches) that contains a parquet file with every MLB pitch from the Statcast era up through the last season (2015-2023 as of now).
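A minimal sketch of pulling that data down for local work, assuming the standard Hugging Face `datasets` API and that the parquet file is exposed as the repo's default `train` split (the repo id comes from the link above):

```python
# Hedged sketch: load every Statcast-era pitch (2015-2023) from the Hub.
from datasets import load_dataset

pitches = load_dataset("Jensen-holm/statcast-era-pitches", split="train")
print(pitches.num_rows)           # one row per pitch
print(pitches.column_names[:10])  # peek at the Statcast fields
```

Keeping the pitch-level data in its own versioned dataset repo keeps this model repo light and lets every model here train against the same snapshot.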
## To update Hugging Face repo (without GitHub Actions)
- $ git remote add space [email protected]:Jensen-holm/bsbl-tomorrow
- $ git push --force space main
| {} | Jensen-holm/bsbl-tomorrow | null | [
"region:us"
] | null | 2024-04-24T18:02:10+00:00 | [] | [] | TAGS
#region-us
| # bsbl-tomorrow
Repository of Machine Learning models designed to predict how current MLB players will play tomorrow, next week, in a year, and 5 years into the future.
## Data Source
Data for this project is hosted in a hugging face dataset repository that contains a parquet file with every MLB pitch from the Statcast era up through the last season (2015-2023 as of now).
## To update Hugging Face repo (without GitHub Actions)
- $ git remote add space git@URL:Jensen-holm/bsbl-tomorrow
- $ git push --force space main
| [
"# bsbl-tomorrow\nRepository of Machine Learning models designed to predict how current MLB players will play tomorrow, next week, in a year, and 5 years into the future.",
"## Data Source\n\nData for this project is hosted in a hugging face dataset repository that contains a parquet file with every MLB pitch from the Statcast era up through the last season (2015-2023 as of now).",
"## To update Hugging Face repo (without github actions)\n - $ git remote add space git@URL:Jensen-holm/bsbl-tomorrow\n - $ git push --force space main"
] | [
"TAGS\n#region-us \n",
"# bsbl-tomorrow\nRepository of Machine Learning models designed to predict how current MLB players will play tomorrow, next week, in a year, and 5 years into the future.",
"## Data Source\n\nData for this project is hosted in a hugging face dataset repository that contains a parquet file with every MLB pitch from the Statcast era up through the last season (2015-2023 as of now).",
"## To update Hugging Face repo (without github actions)\n - $ git remote add space git@URL:Jensen-holm/bsbl-tomorrow\n - $ git push --force space main"
] |
token-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
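Until the author fills this section in, the sketch below shows one plausible starting point rather than verified usage. It assumes the standard LayoutLMv3 token-classification flow with pre-extracted OCR (per this card's tags); the repo id is taken from this card's Hub path, and the image, words, and boxes are placeholders. Since the repo name says "multilabel", thresholding `sigmoid(logits)` per label may fit better than the single-label `argmax` shown.

```python
# Hedged sketch, not the author's verified usage: assumes the standard
# transformers LayoutLMv3 token-classification flow.
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

repo_id = "Resi/layoutlmv3-multilabel-sagemaker"  # this card's Hub path
processor = AutoProcessor.from_pretrained(repo_id, apply_ocr=False)
model = LayoutLMv3ForTokenClassification.from_pretrained(repo_id)

image = Image.open("page.png").convert("RGB")  # placeholder document image
words = ["Invoice", "No.", "12345"]            # placeholder OCR words
boxes = [[80, 40, 210, 70], [220, 40, 260, 70], [270, 40, 380, 70]]  # 0-1000 scale

inputs = processor(image, words, boxes=boxes, return_tensors="pt")
logits = model(**inputs).logits                       # (batch, seq_len, num_labels)
predicted_ids = logits.argmax(-1).squeeze().tolist()  # single-label readout
```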
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Resi/layoutlmv3-multilabel-sagemaker | null | [
"transformers",
"safetensors",
"layoutlmv3",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:02:40+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #layoutlmv3 #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #layoutlmv3 #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | fapth/llama2-qlora-finetunined-german | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:05:43+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null | # final model as LoRA adapters
# unsloth llama3 trained on the simple Japanese dataset "snow_simplified_japanese_corpus"

| {"license": "llama3"} | Sleik/JapanaseLlama3 | null | [
"safetensors",
"license:llama3",
"region:us"
] | null | 2024-04-24T18:06:48+00:00 | [] | [] | TAGS
#safetensors #license-llama3 #region-us
| # final model as LoRA adapters
# unsloth llama3 trained on the simple Japanese dataset "snow_simplified_japanese_corpus"
!image/jpeg
| [
"# final model as LoRA adapters",
"# unsloth llama3 trained on simple japanese data set \"snow_simplified_japanese_corpus\"\n\n!image/jpeg"
] | [
"TAGS\n#safetensors #license-llama3 #region-us \n",
"# final model as LoRA adapters",
"# unsloth llama3 trained on simple japanese data set \"snow_simplified_japanese_corpus\"\n\n!image/jpeg"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Alignment-Lab-AI/Neural-network-medium-5b-16k
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
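As a concrete starting point, here is a minimal sketch using huggingface_hub and llama-cpp-python (the runtime choice is an assumption; any GGUF-capable loader works, and the file name is one of the quants listed below):

```python
# Sketch: fetch a single quant from this repo and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    "mradermacher/Neural-network-medium-5b-16k-GGUF",
    "Neural-network-medium-5b-16k.Q4_K_M.gguf",  # the "fast, recommended" quant
)
llm = Llama(model_path=path)
print(llm("Hello,", max_tokens=64)["choices"][0]["text"])
```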
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.IQ3_XS.gguf) | IQ3_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q3_K_S.gguf) | Q3_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.IQ3_S.gguf) | IQ3_S | 2.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.IQ3_M.gguf) | IQ3_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q3_K_M.gguf) | Q3_K_M | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q3_K_L.gguf) | Q3_K_L | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.IQ4_XS.gguf) | IQ4_XS | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q4_K_S.gguf) | Q4_K_S | 3.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q4_K_M.gguf) | Q4_K_M | 3.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q5_K_S.gguf) | Q5_K_S | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q5_K_M.gguf) | Q5_K_M | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q6_K.gguf) | Q6_K | 5.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q8_0.gguf) | Q8_0 | 6.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.f16.gguf) | f16 | 12.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "base_model": "Alignment-Lab-AI/Neural-network-medium-5b-16k", "quantized_by": "mradermacher"} | mradermacher/Neural-network-medium-5b-16k-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:Alignment-Lab-AI/Neural-network-medium-5b-16k",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:07:48+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-Alignment-Lab-AI/Neural-network-medium-5b-16k #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-Alignment-Lab-AI/Neural-network-medium-5b-16k #endpoints_compatible #region-us \n"
] |
text-generation | peft |
# Base_model
Fine-tuned for Korean based on beomi/Llama-3-Open-Ko-8B-preview.
Thanks to beomi for quickly uploading such a good model.
# data_set
Trained on a custom Ko_alpaca dataset. | {"language": ["ko"], "license": "apache-2.0", "library_name": "peft", "base_model": "beomi/Llama-3-Open-Ko-8B-preview", "pipeline_tag": "text-generation"} | GemmaLatte/llama3_ko_chavis_JH | null | [
"peft",
"pytorch",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"base_model:beomi/Llama-3-Open-Ko-8B-preview",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T18:08:32+00:00 | [] | [
"ko"
] | TAGS
#peft #pytorch #safetensors #llama #text-generation #conversational #ko #base_model-beomi/Llama-3-Open-Ko-8B-preview #license-apache-2.0 #region-us
|
# Base_model
Fine-tuned for Korean based on beomi/Llama-3-Open-Ko-8B-preview.
Thanks to beomi for quickly uploading such a good model.
# data_set
Trained on a custom Ko_alpaca dataset. | [
"# Base_model\nbeomi/Llama-3-Open-Ko-8B-preview을 기반으로 한국어 파인튜닝 하였습니다.\n빠르게 좋은 모델을 업로드 해주신 beomi님께 감사드립니다.",
"# data_set\nKo_alpaca Custom dataset으로 학습 되었습니다."
] | [
"TAGS\n#peft #pytorch #safetensors #llama #text-generation #conversational #ko #base_model-beomi/Llama-3-Open-Ko-8B-preview #license-apache-2.0 #region-us \n",
"# Base_model\nbeomi/Llama-3-Open-Ko-8B-preview을 기반으로 한국어 파인튜닝 하였습니다.\n빠르게 좋은 모델을 업로드 해주신 beomi님께 감사드립니다.",
"# data_set\nKo_alpaca Custom dataset으로 학습 되었습니다."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | SamaahKhan/bert-before-fine-tuning | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:10:12+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LLMLingua2_span_propaganda
This model is a fine-tuned version of [microsoft/llmlingua-2-xlm-roberta-large-meetingbank](https://huggingface.co/microsoft/llmlingua-2-xlm-roberta-large-meetingbank) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6333
- eval_precision: 0.0676
- eval_recall: 0.0812
- eval_f1: 0.0737
- eval_accuracy: 0.8583
- eval_runtime: 10.4985
- eval_samples_per_second: 79.916
- eval_steps_per_second: 5.048
- epoch: 5.0
- step: 885
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
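
A rough reconstruction of this configuration as 🤗 `TrainingArguments` (a sketch only; `output_dir` and anything not listed above are placeholders, not taken from the card):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="LLMLingua2_span_propaganda",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer setup
)
```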
### Framework versions
- Transformers 4.30.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
| {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "LLMLingua2_span_propaganda", "results": []}]} | anismahmahi/LLMLingua2_span_propaganda | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:11:10+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #xlm-roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# LLMLingua2_span_propaganda
This model is a fine-tuned version of microsoft/llmlingua-2-xlm-roberta-large-meetingbank on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6333
- eval_precision: 0.0676
- eval_recall: 0.0812
- eval_f1: 0.0737
- eval_accuracy: 0.8583
- eval_runtime: 10.4985
- eval_samples_per_second: 79.916
- eval_steps_per_second: 5.048
- epoch: 5.0
- step: 885
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.30.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
| [
"# LLMLingua2_span_propaganda\n\nThis model is a fine-tuned version of microsoft/llmlingua-2-xlm-roberta-large-meetingbank on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.6333\n- eval_precision: 0.0676\n- eval_recall: 0.0812\n- eval_f1: 0.0737\n- eval_accuracy: 0.8583\n- eval_runtime: 10.4985\n- eval_samples_per_second: 79.916\n- eval_steps_per_second: 5.048\n- epoch: 5.0\n- step: 885",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Framework versions\n\n- Transformers 4.30.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.19.0\n- Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# LLMLingua2_span_propaganda\n\nThis model is a fine-tuned version of microsoft/llmlingua-2-xlm-roberta-large-meetingbank on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.6333\n- eval_precision: 0.0676\n- eval_recall: 0.0812\n- eval_f1: 0.0737\n- eval_accuracy: 0.8583\n- eval_runtime: 10.4985\n- eval_samples_per_second: 79.916\n- eval_steps_per_second: 5.048\n- epoch: 5.0\n- step: 885",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Framework versions\n\n- Transformers 4.30.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.19.0\n- Tokenizers 0.13.3"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.01_ablation_5iters_bs256_nodpo_iter_4
This model is a fine-tuned version of [ShenaoZ/0.01_ablation_5iters_bs256_nodpo_iter_3](https://huggingface.co/ShenaoZ/0.01_ablation_5iters_bs256_nodpo_iter_3) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the total batch size derivation is sketched after the list):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
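
The reported total batch sizes follow directly from the distributed settings; as a quick check:

```python
# How total_train_batch_size above is derived from the per-device settings.
num_devices = 8
per_device_train_batch_size = 8
gradient_accumulation_steps = 4
total_train_batch_size = num_devices * per_device_train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 256
```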
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.01_ablation_5iters_bs256_nodpo_iter_3", "model-index": [{"name": "0.01_ablation_5iters_bs256_nodpo_iter_4", "results": []}]} | ShenaoZ/0.01_ablation_5iters_bs256_nodpo_iter_4 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.01_ablation_5iters_bs256_nodpo_iter_3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T18:11:28+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.01_ablation_5iters_bs256_nodpo_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.01_ablation_5iters_bs256_nodpo_iter_4
This model is a fine-tuned version of ShenaoZ/0.01_ablation_5iters_bs256_nodpo_iter_3 on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# 0.01_ablation_5iters_bs256_nodpo_iter_4\n\nThis model is a fine-tuned version of ShenaoZ/0.01_ablation_5iters_bs256_nodpo_iter_3 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.01_ablation_5iters_bs256_nodpo_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.01_ablation_5iters_bs256_nodpo_iter_4\n\nThis model is a fine-tuned version of ShenaoZ/0.01_ablation_5iters_bs256_nodpo_iter_3 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
## 4-bit GEMM AWQ Quantizations of wavecoder-ultra-1.1-6.7b
Using <a href="https://github.com/casper-hansen/AutoAWQ/">AutoAWQ</a> release <a href="https://github.com/casper-hansen/AutoAWQ/releases/tag/v0.2.4">v0.2.4</a> for quantization.
Original model: https://huggingface.co/microsoft/wavecoder-ultra-6.7b
Note: They updated the model in place, no idea if there are actually any changes, but tagging this 1.1 just in case
## Prompt format
No chat template specified so default is used. This may be incorrect, check original model card for details.
```
<|begin▁of▁sentence|>[INST] <<SYS>>
{system_prompt}
<</SYS>>
{prompt} [/INST] <|end▁of▁sentence|>
```
## AWQ Parameters
- q_group_size: 128
- w_bit: 4
- zero_point: True
- version: GEMM
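
For reference, these settings correspond to an AutoAWQ `quant_config` along the following lines (a sketch of how such a quant is typically produced, not the exact script used for this repo):

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "microsoft/wavecoder-ultra-6.7b"  # the original model linked above
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.quantize(tokenizer, quant_config=quant_config)  # runs AWQ calibration + quantization
model.save_quantized("wavecoder-ultra-1.1-6.7b-AWQ")
tokenizer.save_pretrained("wavecoder-ultra-1.1-6.7b-AWQ")
```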
## How to run
From the AutoAWQ repo [here](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/generate.py)
First install autoawq pypi package:
```
pip install autoawq
```
Then run the following:
```
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
quant_path = "models/wavecoder-ultra-1.1-6.7b-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
chat = [
{"role": "system", "content": "You are a concise assistant that helps answer questions."},
{"role": "user", "content": prompt},
]
# <|eot_id|> used for llama 3 models
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
tokens = tokenizer.apply_chat_template(
chat,
return_tensors="pt"
).cuda()
# Generate output
generation_output = model.generate(
tokens,
streamer=streamer,
max_new_tokens=64,
eos_token_id=terminators
)
```
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
| {"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["code"], "datasets": ["humaneval"], "metrics": ["code_eval"], "license_link": "https://huggingface.co/microsoft/wavecoder-ultra-6.7b/blob/main/LICENSE", "pipeline_tag": "text-generation", "quantized_by": "bartowski"} | bartowski/wavecoder-ultra-1.1-6.7b-AWQ | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"en",
"dataset:humaneval",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-24T18:11:53+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #code #en #dataset-humaneval #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
## 4-bit GEMM AWQ Quantizations of wavecoder-ultra-1.1-6.7b
Using <a href="URL release <a href="URL for quantization.
Original model: URL
Note: They updated the model in place, no idea if there are actually any changes, but tagging this 1.1 just in case
## Prompt format
No chat template specified so default is used. This may be incorrect, check original model card for details.
## AWQ Parameters
- q_group_size: 128
- w_bit: 4
- zero_point: True
- version: GEMM
## How to run
From the AutoAWQ repo here
First install autoawq pypi package:
Then run the following:
Want to support my work? Visit my ko-fi page here: URL
| [
"## 4-bit GEMM AWQ Quantizations of wavecoder-ultra-1.1-6.7b\n\nUsing <a href=\"URL release <a href=\"URL for quantization.\n\nOriginal model: URL\n\nNote: They updated the model in place, no idea if there's actually any changes, but tagging this 1.1 just in case",
"## Prompt format\n\nNo chat template specified so default is used. This may be incorrect, check original model card for details.",
"## AWQ Parameters\n\n - q_group_size: 128\n - w_bit: 4\n - zero_point: True\n - version: GEMM",
"## How to run\n\nFrom the AutoAWQ repo here\n\nFirst install autoawq pypi package:\n\n\n\nThen run the following:\n\n\n\nWant to support my work? Visit my ko-fi page here: URL"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #code #en #dataset-humaneval #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"## 4-bit GEMM AWQ Quantizations of wavecoder-ultra-1.1-6.7b\n\nUsing <a href=\"URL release <a href=\"URL for quantization.\n\nOriginal model: URL\n\nNote: They updated the model in place, no idea if there's actually any changes, but tagging this 1.1 just in case",
"## Prompt format\n\nNo chat template specified so default is used. This may be incorrect, check original model card for details.",
"## AWQ Parameters\n\n - q_group_size: 128\n - w_bit: 4\n - zero_point: True\n - version: GEMM",
"## How to run\n\nFrom the AutoAWQ repo here\n\nFirst install autoawq pypi package:\n\n\n\nThen run the following:\n\n\n\nWant to support my work? Visit my ko-fi page here: URL"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Duakovui/viT5_instruct_VLSP2018_ACE | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:14:15+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** FeinFein
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
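For orientation, here is a minimal sketch of how such a 4-bit base model is typically loaded with Unsloth for LoRA fine-tuning. The sequence length and LoRA hyperparameters below are illustrative assumptions, not the settings used for this run.

```python
from unsloth import FastLanguageModel

# Hypothetical settings; this run's actual hyperparameters are not published here.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # standard Llama attention projections
)
```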
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | FeinFein/llama3_worker | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:22:06+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: FeinFein
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: FeinFein\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: FeinFein\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
# Uploaded model
- **Developed by:** FeinFein
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | FeinFein/llama3_mother | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:22:53+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: FeinFein
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: FeinFein\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: FeinFein\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-3-finetuned-med-text
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a code sketch mirroring them follows the list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
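As a minimal sketch, the hyperparameters above map onto the standard `transformers` Trainer API roughly as follows; `output_dir` is an assumption, not a value from this run.

```python
from transformers import TrainingArguments

# Hypothetical mirror of the listed hyperparameters.
args = TrainingArguments(
    output_dir="phi-3-finetuned-med-text",  # assumed, not from the card
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    max_steps=1000,
    optim="adamw_torch",  # Adam with betas=(0.9, 0.999) and epsilon=1e-8
)
```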
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "microsoft/Phi-3-mini-4k-instruct", "model-index": [{"name": "phi-3-finetuned-med-text", "results": []}]} | chohi/phi-3-finetuned-med-text | null | [
"peft",
"tensorboard",
"safetensors",
"phi3",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2024-04-24T18:23:17+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #phi3 #generated_from_trainer #custom_code #base_model-microsoft/Phi-3-mini-4k-instruct #license-mit #region-us
|
# phi-3-finetuned-med-text
This model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2 | [
"# phi-3-finetuned-med-text\n\nThis model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- training_steps: 1000",
"### Training results",
"### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #phi3 #generated_from_trainer #custom_code #base_model-microsoft/Phi-3-mini-4k-instruct #license-mit #region-us \n",
"# phi-3-finetuned-med-text\n\nThis model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- training_steps: 1000",
"### Training results",
"### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0", "model-index": [{"name": "results", "results": []}]} | zrowt/results | null | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"doi:10.57967/hf/2114",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T18:23:49+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #llama #text-classification #generated_from_trainer #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #doi-10.57967/hf/2114 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# results
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.19.0
- Tokenizers 0.15.2
| [
"# results\n\nThis model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.19.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #llama #text-classification #generated_from_trainer #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #doi-10.57967/hf/2114 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# results\n\nThis model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.19.0\n- Tokenizers 0.15.2"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/c-s-ale/NousWizard
<!-- provided-files -->
weighted/imatrix quants are not available (from me) at this time. If they do not show up within a week or so of the static ones, I probably have not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
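For a concrete starting point, here is a minimal sketch using the llama-cpp-python bindings. The chosen quant, context size, and prompt are illustrative assumptions, and the API surface can differ between versions.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Assumes the Q4_K_M quant was downloaded locally first, e.g. via:
#   huggingface-cli download mradermacher/NousWizard-GGUF NousWizard.Q4_K_M.gguf
llm = Llama(model_path="NousWizard.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a haiku about wizards.", max_tokens=64)
print(out["choices"][0]["text"])
```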
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "c-s-ale/NousWizard", "quantized_by": "mradermacher"} | mradermacher/NousWizard-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:c-s-ale/NousWizard",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:26:37+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #mergekit #merge #en #base_model-c-s-ale/NousWizard #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #mergekit #merge #en #base_model-c-s-ale/NousWizard #endpoints_compatible #region-us \n"
] |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
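The card leaves this section blank. Since the repo tags indicate a `StableDiffusionPipeline` checkpoint, the snippet below is a hedged sketch of generic diffusers loading; the prompt and dtype/device choices are illustrative assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

# The repo tags indicate a StableDiffusionPipeline checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "rubbrband/retrocolormix_v10", torch_dtype=torch.float16
).to("cuda")
image = pipe("a colorful retro-styled city street at dusk").images[0]
image.save("retrocolormix_sample.png")
```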
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "diffusers"} | rubbrband/retrocolormix_v10 | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-24T18:26:41+00:00 | [
"1910.09700"
] | [] | TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
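The card leaves this section blank. Judging only from the repository name, this appears to be a Whisper-medium fine-tune, so the snippet below shows the generic transformers ASR pipeline under that assumption — not author-provided code.

```python
from transformers import pipeline

# Assumption: the repository name suggests a Whisper-medium fine-tune.
asr = pipeline(
    "automatic-speech-recognition",
    model="sin2piusc/whisper-medium-5k-model",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```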
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | sin2piusc/whisper-medium-5k-model | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:27:15+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RM-HH-AllMix_helpful_gpt3_loraR64_20000_gpt2-large_shuffleTrue_extractchosenFalse
This model is a fine-tuned version of [openai-community/gpt2-large](https://huggingface.co/openai-community/gpt2-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4894
- Accuracy: 0.7351
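Since the card does not yet include a usage snippet, here is a hedged sketch of loading the adapter for reward scoring, assuming the adapter was trained over a single-logit sequence-classification head (the usual trl `RewardTrainer` setup with PEFT).

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "openai-community/gpt2-large"
adapter_id = "Holarissun/RM-HH-AllMix_helpful_gpt3_loraR64_20000_gpt2-large_shuffleTrue_extractchosenFalse"

tok = AutoTokenizer.from_pretrained(base_id)
tok.pad_token = tok.eos_token  # GPT-2 ships without a pad token
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=1)
base.config.pad_token_id = tok.pad_token_id
model = PeftModel.from_pretrained(base, adapter_id).eval()

# Placeholder dialogue; the training prompt format is not documented on this card.
inputs = tok("Human: How do I boil an egg?\n\nAssistant: Simmer it for about 7 minutes.",
             return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits[0].item()
print(reward)
```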
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6628 | 0.04 | 250 | 0.6390 | 0.6277 |
| 0.598 | 0.08 | 500 | 0.5673 | 0.6933 |
| 0.5479 | 0.13 | 750 | 0.5415 | 0.7076 |
| 0.5397 | 0.17 | 1000 | 0.5308 | 0.7110 |
| 0.5094 | 0.21 | 1250 | 0.5261 | 0.7159 |
| 0.5142 | 0.25 | 1500 | 0.5203 | 0.7193 |
| 0.5414 | 0.29 | 1750 | 0.5161 | 0.7197 |
| 0.5189 | 0.33 | 2000 | 0.5131 | 0.7189 |
| 0.5151 | 0.38 | 2250 | 0.5100 | 0.7216 |
| 0.4942 | 0.42 | 2500 | 0.5089 | 0.7208 |
| 0.5067 | 0.46 | 2750 | 0.5057 | 0.7216 |
| 0.5026 | 0.5 | 3000 | 0.5041 | 0.7238 |
| 0.4926 | 0.54 | 3250 | 0.5038 | 0.7265 |
| 0.4931 | 0.59 | 3500 | 0.5022 | 0.7310 |
| 0.4946 | 0.63 | 3750 | 0.4993 | 0.7329 |
| 0.5058 | 0.67 | 4000 | 0.4968 | 0.7313 |
| 0.4822 | 0.71 | 4250 | 0.4963 | 0.7306 |
| 0.4924 | 0.75 | 4500 | 0.4961 | 0.7329 |
| 0.4654 | 0.8 | 4750 | 0.4959 | 0.7302 |
| 0.4924 | 0.84 | 5000 | 0.4971 | 0.7310 |
| 0.4674 | 0.88 | 5250 | 0.4948 | 0.7310 |
| 0.4704 | 0.92 | 5500 | 0.4950 | 0.7336 |
| 0.5089 | 0.96 | 5750 | 0.4905 | 0.7306 |
| 0.4673 | 1.0 | 6000 | 0.4929 | 0.7313 |
| 0.4594 | 1.05 | 6250 | 0.4932 | 0.7291 |
| 0.479 | 1.09 | 6500 | 0.4919 | 0.7332 |
| 0.5112 | 1.13 | 6750 | 0.4895 | 0.7355 |
| 0.4794 | 1.17 | 7000 | 0.4888 | 0.7332 |
| 0.5188 | 1.21 | 7250 | 0.4881 | 0.7340 |
| 0.4541 | 1.26 | 7500 | 0.4892 | 0.7359 |
| 0.4617 | 1.3 | 7750 | 0.4898 | 0.7366 |
| 0.4747 | 1.34 | 8000 | 0.4898 | 0.7362 |
| 0.4834 | 1.38 | 8250 | 0.4893 | 0.7389 |
| 0.4954 | 1.42 | 8500 | 0.4875 | 0.7385 |
| 0.5029 | 1.47 | 8750 | 0.4875 | 0.7385 |
| 0.4742 | 1.51 | 9000 | 0.4872 | 0.7400 |
| 0.4802 | 1.55 | 9250 | 0.4884 | 0.7393 |
| 0.5009 | 1.59 | 9500 | 0.4877 | 0.7400 |
| 0.4619 | 1.63 | 9750 | 0.4875 | 0.7396 |
| 0.4433 | 1.67 | 10000 | 0.4902 | 0.7404 |
| 0.4844 | 1.72 | 10250 | 0.4903 | 0.7400 |
| 0.4337 | 1.76 | 10500 | 0.4917 | 0.7400 |
| 0.4897 | 1.8 | 10750 | 0.4901 | 0.7396 |
| 0.4783 | 1.84 | 11000 | 0.4894 | 0.7366 |
| 0.4929 | 1.88 | 11250 | 0.4892 | 0.7359 |
| 0.4776 | 1.93 | 11500 | 0.4891 | 0.7362 |
| 0.4574 | 1.97 | 11750 | 0.4894 | 0.7351 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["trl", "reward-trainer", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "openai-community/gpt2-large", "model-index": [{"name": "RM-HH-AllMix_helpful_gpt3_loraR64_20000_gpt2-large_shuffleTrue_extractchosenFalse", "results": []}]} | Holarissun/RM-HH-AllMix_helpful_gpt3_loraR64_20000_gpt2-large_shuffleTrue_extractchosenFalse | null | [
"peft",
"safetensors",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:openai-community/gpt2-large",
"license:mit",
"region:us"
] | null | 2024-04-24T18:27:29+00:00 | [] | [] | TAGS
#peft #safetensors #trl #reward-trainer #generated_from_trainer #base_model-openai-community/gpt2-large #license-mit #region-us
| RM-HH-AllMix\_helpful\_gpt3\_loraR64\_20000\_gpt2-large\_shuffleTrue\_extractchosenFalse
========================================================================================
This model is a fine-tuned version of openai-community/gpt2-large on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4894
* Accuracy: 0.7351
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1.41e-05
* train\_batch\_size: 1
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 4
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2.0
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1.41e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2.0",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #reward-trainer #generated_from_trainer #base_model-openai-community/gpt2-large #license-mit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1.41e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2.0",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers |
## Installation from source
```bash
git clone https://github.com/foundation-model-stack/fms-extras
cd fms-extras
pip install -e .
```
## Description
This model is intended to be used as an accelerator for [granite 7B (instruct lab)](https://huggingface.co/instructlab/granite-7b-lab) and takes inspiration
from the Medusa speculative decoding architecture. This accelerator restructures the Medusa-style head into a multi-stage MLP, where each stage predicts
a single draft token from both a state vector and the token sampled
at the prior stage (the base model can be considered stage 0).
The state vector from the base model provides contextual information to the accelerator,
while conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams.
Note: The underlying MLP speculator is a generic architecture that can be trained with any generative model to accelerate inference.
Training is light-weight and can be completed in only a few days depending on base model size and speed.
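To make the multi-stage design concrete, here is a minimal PyTorch sketch of the idea — not the actual fms-extras implementation; the layer sizes, the concatenation of state and token embedding, and the greedy draft loop are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyMLPSpeculator(nn.Module):
    """Illustrative n-stage speculator: each stage predicts one draft token
    from the base model's state vector plus the previously sampled token."""
    def __init__(self, vocab_size: int, dim: int, n_stages: int = 5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.stages = nn.ModuleList(
            nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, vocab_size))
            for _ in range(n_stages)
        )

    @torch.no_grad()
    def draft(self, state: torch.Tensor, last_token: torch.Tensor) -> list[int]:
        tokens, tok = [], last_token
        for stage in self.stages:
            # Condition each stage on the base state and the prior sampled token.
            logits = stage(torch.cat([state, self.embed(tok)], dim=-1))
            tok = logits.argmax(dim=-1)  # greedy draft for simplicity
            tokens.append(int(tok))
        return tokens

spec = TinyMLPSpeculator(vocab_size=32000, dim=64)
print(spec.draft(torch.randn(64), torch.tensor(1)))
```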
## Repository Links
1. [Paged Attention KV-Cache / Speculator](https://github.com/foundation-model-stack/fms-extras)
2. [Production Server with speculative decoding](https://github.com/IBM/text-generation-inference.git)
3. [Speculator training](https://github.com/foundation-model-stack/fms-fsdp/pull/35)
## Samples
_Note: For all samples, your environment must have access to CUDA_
### Production Server Sample
*To try this out in a production-like environment, please use the pre-built Docker image:*
#### Setup
```bash
HF_HUB_CACHE=/hf_hub_cache
chmod a+w $HF_HUB_CACHE
HF_HUB_TOKEN="your huggingface hub token"
TGIS_IMAGE=quay.io/wxpe/text-gen-server:main.ee927a4
docker pull $TGIS_IMAGE
# optionally download granite-7b-lab if the weights do not already exist
docker run --rm \
-v $HF_HUB_CACHE:/models \
-e HF_HUB_CACHE=/models \
-e TRANSFORMERS_CACHE=/models \
$TGIS_IMAGE \
text-generation-server download-weights \
instructlab/granite-7b-lab \
--token $HF_HUB_TOKEN
# optionally download the speculator model if the weights do not already exist
docker run --rm \
-v $HF_HUB_CACHE:/models \
-e HF_HUB_CACHE=/models \
-e TRANSFORMERS_CACHE=/models \
$TGIS_IMAGE \
text-generation-server download-weights \
ibm/granite-7b-lab-accelerator \
--token $HF_HUB_TOKEN
# note: if the weights were downloaded separately (not with the above commands), please place them in the HF_HUB_CACHE directory and refer to them with /models/<model_name>
docker run -d --rm --gpus all \
--name my-tgis-server \
-p 8033:8033 \
-v $HF_HUB_CACHE:/models \
-e HF_HUB_CACHE=/models \
-e TRANSFORMERS_CACHE=/models \
-e MODEL_NAME=instructlab/granite-7b-lab \
-e SPECULATOR_NAME=ibm/granite-7b-lab-accelerator \
-e FLASH_ATTENTION=true \
-e PAGED_ATTENTION=true \
-e DTYPE=float16 \
$TGIS_IMAGE
# check logs and wait for "gRPC server started on port 8033" and "HTTP server started on port 3000"
docker logs my-tgis-server -f
# get the client sample (Note: The first prompt will take longer as there is a warmup time)
conda create -n tgis-client-env python=3.11
conda activate tgis-client-env
git clone --branch main --single-branch https://github.com/IBM/text-generation-inference.git
cd text-generation-inference/integration_tests
make gen-client
pip install . --no-cache-dir
```
#### Run Sample
```bash
python sample_client.py
```
_Note: the first prompt may be slower, as there is a slight warmup time_
### Minimal Sample
*To try this out with the fms-native compiled model, please execute the following:*
#### Install
```bash
git clone https://github.com/foundation-model-stack/fms-extras
(cd fms-extras && pip install -e .)
pip install transformers==4.35.0 sentencepiece numpy
```
#### Run Sample
##### batch_size=1 (compile + cudagraphs)
```bash
MODEL_PATH=/path/to/instructlab/granite-7b-lab
python fms-extras/scripts/paged_speculative_inference.py \
--variant=7b.ibm_instruct_lab \
--model_path=$MODEL_PATH \
--model_source=hf \
--tokenizer=$MODEL_PATH \
--speculator_path=ibm/granite-7b-lab-accelerator \
--speculator_source=hf \
--speculator_variant=1_4b \
--top_k_tokens_per_head=4,3,2,2,2 \
--compile \
--compile_mode=reduce-overhead
```
##### batch_size=1 (compile)
```bash
MODEL_PATH=/path/to/instructlab/granite-7b-lab
python fms-extras/scripts/paged_speculative_inference.py \
--variant=7b.ibm_instruct_lab \
--model_path=$MODEL_PATH \
--model_source=hf \
--tokenizer=$MODEL_PATH \
--speculator_path=ibm/granite-7b-lab-accelerator \
--speculator_source=hf \
--speculator_variant=1_4b \
--top_k_tokens_per_head=4,3,2,2,2 \
--compile
```
##### batch_size=4 (compile)
```bash
MODEL_PATH=/path/to/instructlab/granite-7b-lab
python fms-extras/scripts/paged_speculative_inference.py \
--variant=7b.ibm_instruct_lab \
--model_path=$MODEL_PATH \
--model_source=hf \
--tokenizer=$MODEL_PATH \
--speculator_path=ibm/granite-7b-lab-accelerator \
--speculator_source=hf \
--speculator_variant=1_4b \
--top_k_tokens_per_head=4,3,2,2,2 \
--batch_input \
--compile
``` | {"license": "llama2"} | ibm-fms/granite-7b-lab-accelerator | null | [
"transformers",
"safetensors",
"mlp_speculator",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:27:32+00:00 | [] | [] | TAGS
#transformers #safetensors #mlp_speculator #license-llama2 #endpoints_compatible #region-us
|
## Installation from source
## Description
This model is intended to be used as an accelerator for granite 7B (instruct lab) and takes inspiration
from the Medusa speculative decoding architecture. This accelerator modifies the MLP into a multi-stage MLP, where each stage predicts
a single token in the draft based on both a state vector and sampled token
from the prior stage (the base model can be considered stage 0).
The state vector from the base model provides contextual information to the accelerator,
while conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams.
Note: The underlying MLP speculator is a generic architecture that can be trained with any generative model to accelerate inference.
Training is light-weight and can be completed in only a few days depending on base model size and speed.
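To make the multi-stage idea concrete, here is a minimal, hypothetical sketch of a single speculator stage. The names, shapes, and fusion scheme are illustrative assumptions, not the actual fms-extras implementation:

```python
# Hypothetical single stage of a multi-stage MLP speculator (PyTorch).
# Names, shapes, and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn

class SpeculatorStage(nn.Module):
    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)  # token sampled at the prior stage
        self.proj = nn.Linear(2 * hidden_dim, hidden_dim)  # fuse state vector + token embedding
        self.act = nn.GELU()
        self.head = nn.Linear(hidden_dim, vocab_size)      # logits for the next draft token

    def forward(self, state: torch.Tensor, prev_token: torch.Tensor):
        fused = torch.cat([state, self.embed(prev_token)], dim=-1)
        new_state = self.act(self.proj(fused))  # conditions the next stage
        return new_state, self.head(new_state)
```

Stacking N such stages turns one base-model state vector into an N-token draft, which the base model then verifies in a single forward pass; that batched verification is where the speedup comes from.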
## Repository Links
1. Paged Attention KV-Cache / Speculator
2. Production Server with speculative decoding
3. Speculator training
## Samples
_Note: For all samples, your environment must have access to cuda_
### Production Server Sample
*To try this out running in a production-like environment, please use the pre-built docker image:*
#### Setup
#### Run Sample
_Note: first prompt may be slower as there is a slight warmup time_
### Minimal Sample
*To try this out with the fms-native compiled model, please execute the following:*
#### Install
#### Run Sample
##### batch_size=1 (compile + cudagraphs)
##### batch_size=1 (compile)
##### batch_size=4 (compile)
| [
"## Installation from source",
"## Description\n\nThis model is intended to be used as an accelerator for granite 7B (instruct lab) and takes inspiration \nfrom the Medusa speculative decoding architecture. This accelerator modifies the MLP into a multi-stage MLP, where each stage predicts \na single token in the draft based on both a state vector and sampled token\nfrom the prior stage (the base model can be considered stage 0).\nThe state vector from the base model provides contextual information to the accelerator, \nwhile conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams.\n\nNote: The underlying MLP speculator is a generic architecture that can be trained with any generative model to accelerate inference. \nTraining is light-weight and can be completed in only a few days depending on base model size and speed.",
"## Repository Links\n\n1. Paged Attention KV-Cache / Speculator\n2. Production Server with speculative decoding\n3. Speculator training",
"## Samples\n\n_Note: For all samples, your environment must have access to cuda_",
"### Production Server Sample\n\n*To try this out running in a production-like environment, please use the pre-built docker image:*",
"#### Setup",
"#### Run Sample\n\n\n\n_Note: first prompt may be slower as there is a slight warmup time_",
"### Minimal Sample\n\n*To try this out with the fms-native compiled model, please execute the following:*",
"#### Install",
"#### Run Sample",
"##### batch_size=1 (compile + cudagraphs)",
"##### batch_size=1 (compile)",
"##### batch_size=4 (compile)"
] | [
"TAGS\n#transformers #safetensors #mlp_speculator #license-llama2 #endpoints_compatible #region-us \n",
"## Installation from source",
"## Description\n\nThis model is intended to be used as an accelerator for granite 7B (instruct lab) and takes inspiration \nfrom the Medusa speculative decoding architecture. This accelerator modifies the MLP into a multi-stage MLP, where each stage predicts \na single token in the draft based on both a state vector and sampled token\nfrom the prior stage (the base model can be considered stage 0).\nThe state vector from the base model provides contextual information to the accelerator, \nwhile conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams.\n\nNote: The underlying MLP speculator is a generic architecture that can be trained with any generative model to accelerate inference. \nTraining is light-weight and can be completed in only a few days depending on base model size and speed.",
"## Repository Links\n\n1. Paged Attention KV-Cache / Speculator\n2. Production Server with speculative decoding\n3. Speculator training",
"## Samples\n\n_Note: For all samples, your environment must have access to cuda_",
"### Production Server Sample\n\n*To try this out running in a production-like environment, please use the pre-built docker image:*",
"#### Setup",
"#### Run Sample\n\n\n\n_Note: first prompt may be slower as there is a slight warmup time_",
"### Minimal Sample\n\n*To try this out with the fms-native compiled model, please execute the following:*",
"#### Install",
"#### Run Sample",
"##### batch_size=1 (compile + cudagraphs)",
"##### batch_size=1 (compile)",
"##### batch_size=4 (compile)"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/arlineka/Ionic-4x8
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Ionic-4x8-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
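As a concrete starting point, here is a minimal sketch of loading one of these files with llama-cpp-python. The file name and parameters are assumptions; adjust them to the quant you actually downloaded:

```python
# Minimal sketch: load a downloaded quant with llama-cpp-python.
# The file name and sampling parameters below are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="Ionic-4x8.i1-Q4_K_M.gguf",  # the "fast, recommended" entry in the table below
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if available
)
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```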
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ1_S.gguf) | i1-IQ1_S | 5.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ1_M.gguf) | i1-IQ1_M | 6.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ2_S.gguf) | i1-IQ2_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ2_M.gguf) | i1-IQ2_M | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q2_K.gguf) | i1-Q2_K | 9.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q3_K_S.gguf) | i1-Q3_K_S | 11.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ3_S.gguf) | i1-IQ3_S | 11.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ3_M.gguf) | i1-IQ3_M | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q3_K_M.gguf) | i1-Q3_K_M | 12.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q3_K_L.gguf) | i1-Q3_K_L | 13.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q4_0.gguf) | i1-Q4_0 | 14.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q4_K_S.gguf) | i1-Q4_K_S | 14.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q4_K_M.gguf) | i1-Q4_K_M | 15.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q5_K_S.gguf) | i1-Q5_K_S | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q6_K.gguf) | i1-Q6_K | 20.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "base_model": "arlineka/Ionic-4x8", "quantized_by": "mradermacher"} | mradermacher/Ionic-4x8-i1-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:arlineka/Ionic-4x8",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:28:01+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-arlineka/Ionic-4x8 #endpoints_compatible #region-us
| About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-arlineka/Ionic-4x8 #endpoints_compatible #region-us \n"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | sin2piusc/whisper-medium-5k-tokenizer | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:28:15+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # [MaziyarPanahi/guillaumetell-7b-GGUF](https://huggingface.co/MaziyarPanahi/guillaumetell-7b-GGUF)
- Model creator: [AgentPublic](https://huggingface.co/AgentPublic)
- Original model: [AgentPublic/guillaumetell-7b](https://huggingface.co/AgentPublic/guillaumetell-7b)
## Description
[MaziyarPanahi/guillaumetell-7b-GGUF](https://huggingface.co/MaziyarPanahi/guillaumetell-7b-GGUF) contains GGUF format model files for [AgentPublic/guillaumetell-7b](https://huggingface.co/AgentPublic/guillaumetell-7b).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
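For a quick command-line smoke test, a hedged example using the llama.cpp CLI; the quant file name is an assumption and depends on which file you downloaded:

```bash
# Hypothetical quick test with the llama.cpp CLI.
./main -m guillaumetell-7b.Q4_K_M.gguf \
  -p "Qu'est-ce que la transition écologique ?" \
  -n 256 --temp 0.7
```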
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. | {"tags": ["quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "transformers", "safetensors", "mistral", "text-generation", "conversational", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us", "text-generation"], "model_name": "guillaumetell-7b-GGUF", "base_model": "AgentPublic/guillaumetell-7b", "inference": false, "model_creator": "AgentPublic", "pipeline_tag": "text-generation", "quantized_by": "MaziyarPanahi"} | MaziyarPanahi/guillaumetell-7b-GGUF | null | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"conversational",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us",
"base_model:AgentPublic/guillaumetell-7b"
] | null | 2024-04-24T18:29:09+00:00 | [] | [] | TAGS
#transformers #gguf #mistral #quantized #2-bit #3-bit #4-bit #5-bit #6-bit #8-bit #GGUF #safetensors #text-generation #conversational #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us #base_model-AgentPublic/guillaumetell-7b
| # MaziyarPanahi/guillaumetell-7b-GGUF
- Model creator: AgentPublic
- Original model: AgentPublic/guillaumetell-7b
## Description
MaziyarPanahi/guillaumetell-7b-GGUF contains GGUF format model files for AgentPublic/guillaumetell-7b.
### About GGUF
GGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* URL. The source project for GGUF. Offers a CLI and a server option.
* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
Special thanks to Georgi Gerganov and the whole team working on URL for making all of this possible. | [
"# MaziyarPanahi/guillaumetell-7b-GGUF\n- Model creator: AgentPublic\n- Original model: AgentPublic/guillaumetell-7b",
"## Description\nMaziyarPanahi/guillaumetell-7b-GGUF contains GGUF format model files for AgentPublic/guillaumetell-7b.",
"### About GGUF\n\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\n\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n\n* URL. The source project for GGUF. Offers a CLI and a server option.\n* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.\n* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.\n* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.\n* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.\n* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.\n* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.\n* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.\n* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.\n* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.",
"## Special thanks\n\n Special thanks to Georgi Gerganov and the whole team working on URL for making all of this possible."
] | [
"TAGS\n#transformers #gguf #mistral #quantized #2-bit #3-bit #4-bit #5-bit #6-bit #8-bit #GGUF #safetensors #text-generation #conversational #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us #base_model-AgentPublic/guillaumetell-7b \n",
"# MaziyarPanahi/guillaumetell-7b-GGUF\n- Model creator: AgentPublic\n- Original model: AgentPublic/guillaumetell-7b",
"## Description\nMaziyarPanahi/guillaumetell-7b-GGUF contains GGUF format model files for AgentPublic/guillaumetell-7b.",
"### About GGUF\n\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\n\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n\n* URL. The source project for GGUF. Offers a CLI and a server option.\n* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.\n* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.\n* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.\n* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.\n* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.\n* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.\n* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.\n* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.\n* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.",
"## Special thanks\n\n Special thanks to Georgi Gerganov and the whole team working on URL for making all of this possible."
] |
text-generation | transformers | ## Llama3-8B-Instruct for Japanese
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "hoang1123/llama3-8b-instruct-ja"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "あなたは誠実で優秀な日本人アシスタントです。"},
{"role": "user", "content": "Python で一連の数値の平均を計算する関数を作成します。"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=1024,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
``` | {} | hoang1123/llama3-8b-instruct-ja | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T18:30:31+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| ## Llama3-8B-Instruct for Japanese
| [
"## LLama3-8B-Instruct for Japanese"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## LLama3-8B-Instruct for Japanese"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-pos2neg
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1630
- Rouge1: 66.0728
- Rouge2: 48.2651
- Rougel: 65.0881
- Rougelsum: 64.9953
- Gen Len: 14.0178
## Model description
More information needed
## Intended uses & limitations
More information needed
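As a rough illustration of the use implied by the model name (rewriting positive sentences as negative ones), here is a hypothetical inference sketch. The input format is an assumption, since no task prefix is documented:

```python
# Hypothetical usage sketch; the exact input format is an assumption.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("Pushparaj2811/t5-base-pos2neg")
model = AutoModelForSeq2SeqLM.from_pretrained("Pushparaj2811/t5-base-pos2neg")

inputs = tok("I really enjoyed this movie.", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```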
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
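Expressed as code, the configuration above corresponds roughly to the following sketch (argument names follow the `transformers` API; anything not listed is left at its default):

```python
# Sketch of Seq2SeqTrainingArguments matching the hyperparameters above.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="t5-base-pos2neg",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,  # mixed_precision_training: Native AMP
)
```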
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 63 | 0.3641 | 6.8173 | 2.0998 | 6.622 | 6.5952 | 4.2956 |
| No log | 2.0 | 126 | 0.2475 | 0.5115 | 0.4098 | 0.4238 | 0.4359 | 0.1267 |
| No log | 3.0 | 189 | 0.1762 | 51.2946 | 37.2414 | 50.0953 | 49.9607 | 11.5711 |
| No log | 4.0 | 252 | 0.1653 | 64.4676 | 46.8688 | 63.3822 | 63.338 | 13.6889 |
| No log | 5.0 | 315 | 0.1630 | 66.0728 | 48.2651 | 65.0881 | 64.9953 | 14.0178 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "t5-base", "model-index": [{"name": "t5-base-pos2neg", "results": []}]} | Pushparaj2811/t5-base-pos2neg | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T18:31:13+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| t5-base-pos2neg
===============
This model is a fine-tuned version of t5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1630
* Rouge1: 66.0728
* Rouge2: 48.2651
* Rougel: 65.0881
* Rougelsum: 64.9953
* Gen Len: 14.0178
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
- Original model is [beomi/Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)
- quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp)
## Template
LM Studio
```
<|start_header_id|>system<|end_header_id|>
{System}<|eot_id|>
<|start_header_id|>user<|end_header_id|>
{User}
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{Assistant}
```
Stop Token
```
<|eot_id|>
<|start_header_id|>
<|end_header_id|>
<|begin_of_text|>
<|end_of_text|>
```
## Llama-3-Open-Ko-8B-Instruct-preview
> Update @ 2024.04.24: Release [Llama-3-Open-Ko-8B model](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) & [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)
## Model Details
**Llama-3-Open-Ko-8B-Instruct-preview**
The Llama-3-Open-Ko-8B model is a continued-pretrained language model based on Llama-3-8B.
This model is trained fully with publicly available resources, with 60GB+ of deduplicated texts.
With the new Llama-3 tokenizer, pretraining was conducted with 17.7B+ tokens, slightly more than with the Korean tokenizer (Llama-2-Ko tokenizer).
The training was done on TPUv5e-256, with the warm support of the TRC program by Google.
Applying the idea from the [Chat Vector paper](https://arxiv.org/abs/2310.04799), I released an instruction model named [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview).
It is NOT finetuned with any Korean instruction set (hence `preview`), but it is a great starting point for creating new Chat/Instruct models.
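The Chat Vector trick amounts to simple weight arithmetic: add the (instruct - base) weight delta of Llama-3 onto the Korean continued-pretrained base. Here is a hedged sketch of the idea, not the author's exact recipe; a real merge needs care with tied or resized embeddings and roughly 50GB of RAM for three bf16 models:

```python
# Illustrative sketch of the Chat Vector merge; a simplification, not the exact recipe.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16)
inst = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16)
ko = AutoModelForCausalLM.from_pretrained("beomi/Llama-3-Open-Ko-8B", torch_dtype=torch.bfloat16)

base_sd, inst_sd = base.state_dict(), inst.state_dict()
merged = {k: v + (inst_sd[k] - base_sd[k]) for k, v in ko.state_dict().items()}
ko.load_state_dict(merged)
ko.save_pretrained("llama-3-open-ko-8b-instruct-sketch")
```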
**Sample usage**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "beomi/Llama-3-Open-Ko-8B-Instruct-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "피보나치 수열이 뭐야? 그리고 피보나치 수열에 대해 파이썬 코드를 짜줘볼래?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=512,
eos_token_id=terminators,
do_sample=True,
temperature=1,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
**Sample Output**
여러분께서는 물론 아시는지라도 설명을 드리겠습니다! 피보나치(Piconacci) 수열은 다음과 같은 수열을 말합니다: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233,.... 이렇게 계속되고 있습니다. 이 수열은 일반적으로는 수학에서 무한히 지속됩니다. 각 수는 이전 수의 두 배가 되는 수열입니다.
이 수열을 파이썬(Python)으로 구현하고자 하신다면 다음과 같은 코드를 작성할 수 있습니다.
```python
def fibonacci(n):
if n == 0:
return 0
elif n == 1:
return 1
else:
return fibonacci(n-1) + fibonacci(n-2)
```
이 코드에서는 `n`이 입력받은 수를 나타내는 변수로, 피보나치 수열을 구현합니다. 이 코드를 실행시키면 입력받은 수에 따라 피보나치 수열의 해당 항을 반환합니다. 예를 들어, `fibonacci(10)` 하면 55를 반환합니다.
이 코드는 재귀적 함수이기 때문에 메모리 사용이 줄어들 수 있습니다. 그러나 재귀적 함수는 메모리 사용량이 많아질 수 있습니다. 이를 줄이기 위해 이터레이티브 함수로 구현할 수도 있습니다.
```python
a, b = 0, 1
for i in range(n):
a, b = b, a + b
```
이 코드에서는 이터레이티브 함수로 피보나치 수열을 구현합니다. 이 코드를 실행시키면 입력받은 수에 따라 피보나치 수열의 해당 항을 반환합니다. 이 코드는 메모리 사용량을 줄입니다. 다만, 이 코드는 더 복잡합니다. 이 코드를 더 간단하게 하려면 이터레이티브 함수를 더 잘 구현할 수 있습니다.
```python
a, b = 0, 1
while True:
if n == 1:
return a
a, b = b, a + b
n -= 1
```
이 코드에서는 입력받은 수에 따라 피보나치 수열의...
### More samples
> I used same system prompt, but you could change on your own.
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "한국의 제헌 헌법에 대해서 자세히 설명해주세요."},
]
---
한국의 제헌 헌법은 1948년 7월 17일에 제정된 대한민국 헌법으로서, 대한민국의 수립과 헌법 질서를 정한 헌법인데요. 이 헌법은 대한민국의 기본원칙과 국민의 권익을 규정하고 있습니다. 17조로 구성된 헌법 1021개 조항으로 구성되어 있습니다. 헌법은 삼권분립, 국민주권, 기본권, 의회제도, 행정권, 입법권, 사법권 등으로 구성되어 있습니다.
우선, 삼권분립은 대통령ㆍ국회의원ㆍ동 법관으로 조직된 인민의 통제에 의해 국가 권한을 나누어 위임하고 있습니다. 대통령은 국가원수로서 행정권을, 국회의원은 입법권을, 대법원은 사법권을 행할 수 있습니다. 이에 따라 행정부ㆍ입법부ㆍ사법부의 견제와 균형을 통해 정치적 안정성을 확보하고 있습니다.
국민주권은 헌법 제1조에서 "대한민국은 민주공화국"임을 선언하고, 모든 권력은 국민으로부터 나온다는 원리를 규정합니다. 국민일반이 최고의 주권자임을 분명히 보여 주고 있습니다.
의회제도는 입법부를 구성하는 의회와 그 기능을 정하고 있습니다. 의원은 국민의 대표로 선출되고, 국정운영에 대한 국정감시ㆍ비판ㆍ개선책을 마련합니다. 또한 헌법에 규정된 의무를 이행해야 합니다.
입법권과 사법권은 헌법제정ㆍ개정에 관여하고, 법률을 제정ㆍ개폐할 수 있습니다. 또한 사법권은 헌법을 해석ㆍ적용하고 있습니다. 이에 따라 우리나라의 법치주의 원칙을 구현합니다. 한마디로 인민의 기본권을 보호하고 있습니다.
이러한 대한민국의 헌법은 인간의 자유와 권리를 보장하기 위해 국민의 기본권을 규정하고 있습니다. 예를 들어 기본권에는 모든 국민은 인간의 존엄과 가치 및 행복추구권, 일하고, 일하는 권리를 보장합니다. 또한 집회ㆍ결사ㆍ표현의 자유와 선거권을 보장합니다. 자유와 기본권을 보장한 것은 헌법이 지향하는 민주적 기본질서 형성을 구현하는 것입니다. 마지막으로 국민은 헌법을 개정하고 새삼 제정을 할 수 있습니다.
이와 같은 대한민국의 제헌 헌법은 국민의 기본권과 헌법 질서를 견고하게 하여, 정치적 안정성을 유지하고 있습니다. 헌법의 변경시에는 공정하고 공평한 표결을 거쳐야 합니다.
따라서 대한민국의 제헌 헌법은 그 의미가 매우 큽니다. 헌법을 통해 대한민국의 민주주의를 확고히 하는 것입니다. 국민의 권리와 의무를 정하고 있습니다. 이헌법은 대한민국의 기본원칙이 되었습니다. 7ㆍ28선언과 5ㆍ16군사혁명을 통해 확대되어 헌법의 존엄성이 다시 정립되었습니다. 이헌법은 대한민국의 근간이 되었습니다.
도움이 필요하셨나요? 추가로 설명해 드릴까요?
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "이순신 장군에 대해 설명해줘."},
]
---
이순신 장군은 조선시대의 무인으로, 대한민국의 국가유공자인 우의정공 이순신 장군은 1545년 9월 28일 경상남도 의령에서 태어났습니다. 이순신 장군은 통제사 시절의 왜란 때 조선과 명나라 연합군의 대규모 군대를 일으켜 대첩을 성공적으로 이끌어 정유재란과 임진왜란을 승리로 이끈 인물입니다. 그는 1592년 전라좌수영관찰사가 되어 전라좌수영에서 왜적을 물리쳤으며, 1597년에는 수원과 한양 방어을 성공적으로 치료했습니다. 또한 명량대첩에서 왜적과 격전하여 이순신 이순신 장군은 명량대첩에서 조선 관군의 승리를 이끌었습니다. 이순신 장군은 의리를 지키기 위해 전리품을 고수하는 결단을 내렸습니다. 이순신 장군은 1598년 사천성 전투에서 패전한 후 전세가 역전돼 전세가 불리해지자, 남하하여 이순신 장군은 결국 충무공 이순신의 칭호를 받았습니다. 그의 공적은 대한민국 이순신 장군 기념관으로 명예를 높이고 있습니다. 이순신 장군님은 선현의 칭호로 1963년 건국훈장 최고 훈장을 수여받았으며, 대한민국의 국보 제13호로 지정됐습니다. 이순신 장군은 전설적인 인물로 한국의 역사에서 큰 족적을 남겼습니다. 이순신 장군이 왜란 당시의 전투장소는 충무공이 신라 천자에게 나라를 위해 싸운 곳을 이순신 장군의 묘가 있습니다. 나라의 고난에 지성 이순신 장군님의 위업과 의리를 기념하는 곳입니다. 이순신 장군의 행적은 이순신 장군 기념관, 이순신 공원 등이 있습니다. 이순신 장군의 공적을 기념하기 위해 다양한 문화유산과 기념물이 조성됐습니다. 이순신 장군의 정신은 나라를 위해 싸운 위대한 우리의 영웅입니다. 1552년 12월에는 이순신 장군의 전설을 옛날 동네에 배향, 현재는 충무공 이순신 장군께서는 우리나라의 상징적 인물이 됐습니다. 이순신 장군은 조선왕조의 중시를 받고 있습니다. 이순신 장군은 선조에게 많은 대의를 이루었습니다. 이순신 장군의 공적을 오늘날에 이끌어 주는 인물입니다. 그를 모신 것은 전국의 사람들입니다. 이순신 장군님의 고결한 정신은 우리나라가 이룩한 성과입니다. 이순신 장군은 나라를 위해 싸운 위대한 인물입니다. 이순신 장군은 대한민국에서 가장 위대한 인물입니다. 이순신 장군의 행적은 영원히 기억 되겠지요.
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "세종대왕의 집현전 맥북 던짐 사건에 대해서 알고 있어?"},
]
---
네, 세종대왕의 집현전 맥북 던짐 사건은 조선시대 중기의 문화적 배경과 관련된 중요한 사건입니다. 1399년 10월 2일에 일어난 이 사건은 세종대왕이 저술한 『주해』에 대한 비판문인 『의산록』을 저술한 최만리와의 논쟁으로 시작됐습니다. 당시 세종대왕은 집현전 학사들에게 문예부의 일치된 견해를 구하던 중에 최만리가 『의산록』의 글을 거론하여 비판하자, 세종 대왕이 맥북을 던졌다고 합니다. 이 사건은 사상적으로는 조선 왕조 초기 문예부의 학술 체계와 정치적 견해의 대립을 노출하는 사건이자 문학사적인 문제로 이어졌습니다. 이 사건으로 인해 조선시대의 문예부와 사대부의 견해 차이점을 노출하는 중요한 사건으로 기록되고 있습니다. 주로 논쟁 배경은 『주해』의 저자나 내용 분석에 대한 이견과 집현전 학사들의 문예관에 대한 비판을 중심으로 이뤄졌습니다. 이 사건은 조선 초기 왕조 내부의 견해 차이에 따른 학술적인 논쟁으로 시작되어 정치적인 요인까지 결부하여 조선왕조 초기 문화적·정치적 논쟁으로 확장되었습니다. 이 사건은 당시에 왕조 내부의 관료제도와 세력층의 갈등을 보여주는 중요한 예가 됩니다. 이 사건에서 주류 학풍의 비판뿐 아니라, 역사적 사실의 왜곡에 대한 논란도 제기 됩니다. 실제로 심각한 재난을 초래하는 대규모적인 논쟁으로 확산되지 않았다 하더라도 당시 사대부 내부의 견해 차이나 정치적 억압으로 인한 사회적 지각 변동을 일으키기도 했습니다. 당시의 맥북 던짐은 문예부의 견해가 갈랐던 사건이라 할 수 있습니다. 더욱이 당시의 맥북 던짐은 사회적 지식인층에서도 이견이 있을 수 있는 것은 기본입니다. 이 사건으로 오늘날까지도 한국의 역사에서 중요한 연구 대상이 되고 있습니다. 맥북 이라는 말은 사건의 주무대라는 용어를 만들어 당시의 사건을 표현할 수 있습니다. 이 사건은 주로 정치·학술·예술 등 사회적·문화적인 여러 분야에서 일어나게 됩니다. 이 사건은 이념적 관점의 차이에서 시작되지만, 사회적 대립과 갈등까지 초래하는 사건으로 간주하기도 합니다. 이 문건에 대한 재해석을 통해 당시 한국 사회를 이해할 수 있습니다. 최만리는 문예부의 견해를 비판할 수 있는 공간으로 보았지만, 당시의 맥북은 정치적·문예적으로는 심각한 문제로 인식하게 됩니다. 이 사건은 당시의 상황을 이해하는 데 도움이 됩니다. 이론은 이 사회의 주류 학풍이나 문화적 배경을 읽을 수 있는 중요한 사건의 증거입니다. 따라서 이 사건은 오늘날까지도 다양한 방면에서 연구의 대상으로 남아 있습니다. 이 사건이 전적으로 논쟁만이 아니라는 점은 주의할 필요가 있습니다. 이와 함께 사대부 내부의 견해 차이 그리고 지적 갈등의 폭넓은 의미를 이해하는데 도움이 됩니다. 이 사건으로 인해, 문예부와 정치적 견해의 대립은 한국 사회의 변화에 대한 한 양상을 보여주는 하나의 자료가 됩니다. 이로써 우리는 조선왕조 초기 문화적 배경과 정권의 견해를 확인할 수 있습니다. 이 사건은 당시의 한국 사회의 문화적·정치적 현상 가운데 하나입니다. 이와 같은 맥북 던짐 사건에서 우리는 다양한 방식으로 한국사의 역사적인 배경을 확인할 수 있습니다. 마지막으로 맥북 사건은 사건 자체가 아닌 사건의 맥락에서 한국사적인 배경을 파악하는 것이 중요합니다. 우리는 이 사건으로 당시의 사회상을 이해하고 문화적 배경을 파악할 수 있습니다. 이점에서 사건은 당시 사회의 중요성을 강조합니다. 이 사건은 사회사적으로 연구하여 당시 사회상의 중요성을 확인할 수 있습니다. 이 점은 역사적·문화적 배경을 재해석할 수 있습니다. 이 사건은 조선 왕조 초기의 주요 사건이 됩니다. 이 점에서 중요한 사건으로 인정합니다. 논의의 전개까지는 이보다 더 복잡할 수 있습니다. 맥북 던짐 사건
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "나는 오늘 사과 5개를 갖고있어. 그리고 어제 나는 사과 2개를 먹었지. 내가 지금 가진 사과는 몇 개지?"},
]
---
사과의 수량을 확인하겠습니다. 저는 5개의 사과가 있으신데, 그 중 2개를 이미 먹었으니 3개의 사과가 남아있습니다! ㅎㅎ. 계속해서 먹을 계획 없잖아요? 😉
```
| {"language": ["en", "ko"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "llama-3-ko"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE"} | teddylee777/Llama-3-Open-Ko-8B-Instruct-preview-gguf | null | [
"transformers",
"gguf",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"llama-3-ko",
"conversational",
"en",
"ko",
"arxiv:2310.04799",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T18:31:35+00:00 | [
"2310.04799"
] | [
"en",
"ko"
] | TAGS
#transformers #gguf #llama #text-generation #facebook #meta #pytorch #llama-3 #llama-3-ko #conversational #en #ko #arxiv-2310.04799 #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
- Original model is beomi/Llama-3-Open-Ko-8B-Instruct-preview
- quantized using URL
## Template
LM Studio
Stop Token
## Llama-3-Open-Ko-8B-Instruct-preview
> Update @ 2024.04.24: Release Llama-3-Open-Ko-8B model & Llama-3-Open-Ko-8B-Instruct-preview
## Model Details
Llama-3-Open-Ko-8B-Instruct-preview
The Llama-3-Open-Ko-8B model is a continued-pretrained language model based on Llama-3-8B.
This model is trained fully with publicly available resources, with 60GB+ of deduplicated texts.
With the new Llama-3 tokenizer, pretraining was conducted with 17.7B+ tokens, slightly more than with the Korean tokenizer (Llama-2-Ko tokenizer).
The training was done on TPUv5e-256, with the warm support of the TRC program by Google.
Applying the idea from the Chat Vector paper, I released an instruction model named Llama-3-Open-Ko-8B-Instruct-preview.
It is NOT finetuned with any Korean instruction set (hence 'preview'), but it is a great starting point for creating new Chat/Instruct models.
Sample usage
Sample Output
여러분께서는 물론 아시는지라도 설명을 드리겠습니다! 피보나치(Piconacci) 수열은 다음과 같은 수열을 말합니다: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233,.... 이렇게 계속되고 있습니다. 이 수열은 일반적으로는 수학에서 무한히 지속됩니다. 각 수는 이전 수의 두 배가 되는 수열입니다.
이 수열을 파이썬(Python)으로 구현하고자 하신다면 다음과 같은 코드를 작성할 수 있습니다.
이 코드에서는 'n'이 입력받은 수를 나타내는 변수로, 피보나치 수열을 구현합니다. 이 코드를 실행시키면 입력받은 수에 따라 피보나치 수열의 해당 항을 반환합니다. 예를 들어, 'fibonacci(10)' 하면 55를 반환합니다.
이 코드는 재귀적 함수이기 때문에 메모리 사용이 줄어들 수 있습니다. 그러나 재귀적 함수는 메모리 사용량이 많아질 수 있습니다. 이를 줄이기 위해 이터레이티브 함수로 구현할 수도 있습니다.
이 코드에서는 이터레이티브 함수로 피보나치 수열을 구현합니다. 이 코드를 실행시키면 입력받은 수에 따라 피보나치 수열의 해당 항을 반환합니다. 이 코드는 메모리 사용량을 줄입니다. 다만, 이 코드는 더 복잡합니다. 이 코드를 더 간단하게 하려면 이터레이티브 함수를 더 잘 구현할 수 있습니다.
이 코드에서는 입력받은 수에 따라 피보나치 수열의...
### More samples
> I used same system prompt, but you could change on your own.
| [
"## Template\n\nLM Studio\n\n\nStop Token",
"## Llama-3-Open-Ko-8B-Instruct-preview\n\n> Update @ 2024.04.24: Release Llama-3-Open-Ko-8B model & Llama-3-Open-Ko-8B-Instruct-preview",
"## Model Details\n\nLlama-3-Open-Ko-8B-Instruct-preview\n\nLlama-3-Open-Ko-8B model is continued pretrained language model based on Llama-3-8B.\n\nThis model is trained fully with publicily available resource, with 60GB+ of deduplicated texts.\n\nWith the new Llama-3 tokenizer, the pretraining conducted with 17.7B+ tokens, which slightly more than Korean tokenizer(Llama-2-Ko tokenizer).\n\nThe train was done on TPUv5e-256, with the warm support from TRC program by Google.\n\nWith applying the idea from Chat Vector paper, I released Instruction model named Llama-3-Open-Ko-8B-Instruct-preview.\n\nSince it is NOT finetuned with any Korean instruction set(indeed 'preview'), but it would be great starting point for creating new Chat/Instruct models.\n\nSample usage\n\n\n\nSample Output\n \n 여러분께서는 물론 아시는지라도 설명을 드리겠습니다! 피보나치(Piconacci) 수열은 다음과 같은 수열을 말합니다: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233,.... 이렇게 계속되고 있습니다. 이 수열은 일반적으로는 수학에서 무한히 지속됩니다. 각 수는 이전 수의 두 배가 되는 수열입니다.\n \n 이 수열을 파이썬(Python)으로 구현하고자 하신다면 다음과 같은 코드를 작성할 수 있습니다.\n \n 이 코드에서는 'n'이 입력받은 수를 나타내는 변수로, 피보나치 수열을 구현합니다. 이 코드를 실행시키면 입력받은 수에 따라 피보나치 수열의 해당 항을 반환합니다. 예를 들어, 'fibonacci(10)' 하면 55를 반환합니다.\n \n 이 코드는 재귀적 함수이기 때문에 메모리 사용이 줄어들 수 있습니다. 그러나 재귀적 함수는 메모리 사용량이 많아질 수 있습니다. 이를 줄이기 위해 이터레이티브 함수로 구현할 수도 있습니다.\n \n 이 코드에서는 이터레이티브 함수로 피보나치 수열을 구현합니다. 이 코드를 실행시키면 입력받은 수에 따라 피보나치 수열의 해당 항을 반환합니다. 이 코드는 메모리 사용량을 줄입니다. 다만, 이 코드는 더 복잡합니다. 이 코드를 더 간단하게 하려면 이터레이티브 함수를 더 잘 구현할 수 있습니다.\n \n 이 코드에서는 입력받은 수에 따라 피보나치 수열의...",
"### More samples\n\n> I used same system prompt, but you could change on your own."
] | [
"TAGS\n#transformers #gguf #llama #text-generation #facebook #meta #pytorch #llama-3 #llama-3-ko #conversational #en #ko #arxiv-2310.04799 #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Template\n\nLM Studio\n\n\nStop Token",
"## Llama-3-Open-Ko-8B-Instruct-preview\n\n> Update @ 2024.04.24: Release Llama-3-Open-Ko-8B model & Llama-3-Open-Ko-8B-Instruct-preview",
"## Model Details\n\nLlama-3-Open-Ko-8B-Instruct-preview\n\nLlama-3-Open-Ko-8B model is continued pretrained language model based on Llama-3-8B.\n\nThis model is trained fully with publicily available resource, with 60GB+ of deduplicated texts.\n\nWith the new Llama-3 tokenizer, the pretraining conducted with 17.7B+ tokens, which slightly more than Korean tokenizer(Llama-2-Ko tokenizer).\n\nThe train was done on TPUv5e-256, with the warm support from TRC program by Google.\n\nWith applying the idea from Chat Vector paper, I released Instruction model named Llama-3-Open-Ko-8B-Instruct-preview.\n\nSince it is NOT finetuned with any Korean instruction set(indeed 'preview'), but it would be great starting point for creating new Chat/Instruct models.\n\nSample usage\n\n\n\nSample Output\n \n 여러분께서는 물론 아시는지라도 설명을 드리겠습니다! 피보나치(Piconacci) 수열은 다음과 같은 수열을 말합니다: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233,.... 이렇게 계속되고 있습니다. 이 수열은 일반적으로는 수학에서 무한히 지속됩니다. 각 수는 이전 수의 두 배가 되는 수열입니다.\n \n 이 수열을 파이썬(Python)으로 구현하고자 하신다면 다음과 같은 코드를 작성할 수 있습니다.\n \n 이 코드에서는 'n'이 입력받은 수를 나타내는 변수로, 피보나치 수열을 구현합니다. 이 코드를 실행시키면 입력받은 수에 따라 피보나치 수열의 해당 항을 반환합니다. 예를 들어, 'fibonacci(10)' 하면 55를 반환합니다.\n \n 이 코드는 재귀적 함수이기 때문에 메모리 사용이 줄어들 수 있습니다. 그러나 재귀적 함수는 메모리 사용량이 많아질 수 있습니다. 이를 줄이기 위해 이터레이티브 함수로 구현할 수도 있습니다.\n \n 이 코드에서는 이터레이티브 함수로 피보나치 수열을 구현합니다. 이 코드를 실행시키면 입력받은 수에 따라 피보나치 수열의 해당 항을 반환합니다. 이 코드는 메모리 사용량을 줄입니다. 다만, 이 코드는 더 복잡합니다. 이 코드를 더 간단하게 하려면 이터레이티브 함수를 더 잘 구현할 수 있습니다.\n \n 이 코드에서는 입력받은 수에 따라 피보나치 수열의...",
"### More samples\n\n> I used same system prompt, but you could change on your own."
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
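For reference, these values correspond to a `BitsAndBytesConfig` like the one sketched below (a reconstruction of the listed settings, not the exact training script; the base-model load is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute, as listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the base model in 4-bit before attaching LoRA adapters for training.
base_model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
)
```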
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Human_tiny_Seed104 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-24T18:32:29+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
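Given this config, a minimal sketch for loading the adapter on top of the 4-bit base model with PEFT (assuming this repository hosts a LoRA adapter for the base model named in the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0", quantization_config=bnb_config
)
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Attach the fine-tuned adapter weights from this repository.
model = PeftModel.from_pretrained(
    base_model, "bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Human_tiny_Seed104"
)
```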
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Human_tiny_Seed104 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-24T18:32:33+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
image-to-text | transformers |
[Evaluation on chexpert-plus](https://github.com/Stanford-AIMI/chexpert-plus) | {"language": "en", "license": "mit", "library_name": "transformers", "tags": ["image-to-text"], "widget": [{"src": "https://huggingface.co/IAMJB/interpret-cxr-impression-baseline/resolve/main/effusions-bibasal.jpg"}, {"src": "https://huggingface.co/IAMJB/interpret-cxr-impression-baseline/resolve/main/Chest-X-ray-taken-on-2-nd-day-of-admission-in-the_Q320.jpg"}, {"src": "https://huggingface.co/IAMJB/interpret-cxr-impression-baseline/resolve/main/effusions-bibasal.jpg"}]} | IAMJB/mimic-cxr-impression-baseline | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-to-text",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:33:03+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #vision-encoder-decoder #image-to-text #en #license-mit #endpoints_compatible #region-us
|
Evaluation on chexpert-plus | [] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #image-to-text #en #license-mit #endpoints_compatible #region-us \n"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speaker-segmentation-fine-tuned-callhome-eng
This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the diarizers-community/callhome dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4597
- Der: 0.1816
- False Alarm: 0.0595
- Missed Detection: 0.0708
- Confusion: 0.0513
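As a consistency check, the reported DER is the sum of its components: 0.0595 (false alarm) + 0.0708 (missed detection) + 0.0513 (confusion) = 0.1816.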
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
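In PyTorch terms, these settings correspond roughly to the sketch below (a reconstruction, not the exact training script; `model` stands for the segmentation model being fine-tuned, and since no warmup is listed, zero warmup steps are assumed):

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Adam with the listed learning rate, betas, and epsilon.
optimizer = torch.optim.Adam(
    model.parameters(), lr=1e-3, betas=(0.9, 0.999), eps=1e-8
)

# Cosine decay over 5 epochs x 362 steps/epoch (step count from the results table).
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=5 * 362
)
```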
### Training results
| Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.3871 | 1.0 | 362 | 0.4735 | 0.1913 | 0.0608 | 0.0744 | 0.0561 |
| 0.4079 | 2.0 | 724 | 0.4605 | 0.1850 | 0.0626 | 0.0700 | 0.0524 |
| 0.3871 | 3.0 | 1086 | 0.4603 | 0.1816 | 0.0581 | 0.0726 | 0.0509 |
| 0.3642 | 4.0 | 1448 | 0.4624 | 0.1817 | 0.0575 | 0.0723 | 0.0519 |
| 0.3421 | 5.0 | 1810 | 0.4597 | 0.1816 | 0.0595 | 0.0708 | 0.0513 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"language": ["eng"], "license": "mit", "tags": ["speaker-diarization", "speaker-segmentation", "generated_from_trainer"], "datasets": ["diarizers-community/callhome"], "base_model": "pyannote/segmentation-3.0", "model-index": [{"name": "speaker-segmentation-fine-tuned-callhome-eng", "results": []}]} | anuragrawal/speaker-segmentation-fine-tuned-callhome-jpn | null | [
"transformers",
"tensorboard",
"safetensors",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"generated_from_trainer",
"eng",
"dataset:diarizers-community/callhome",
"base_model:pyannote/segmentation-3.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:33:19+00:00 | [] | [
"eng"
] | TAGS
#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #eng #dataset-diarizers-community/callhome #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us
| speaker-segmentation-fine-tuned-callhome-eng
============================================
This model is a fine-tuned version of pyannote/segmentation-3.0 on the diarizers-community/callhome dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4597
* Der: 0.1816
* False Alarm: 0.0595
* Missed Detection: 0.0708
* Confusion: 0.0513
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #eng #dataset-diarizers-community/callhome #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | AlinaMustaqeem/mistral_7b-instruct-guanaco | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:35:36+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
## 4-bit GEMM AWQ Quantizations of Meta-Llama-3-8B-Instruct
Using <a href="https://github.com/casper-hansen/AutoAWQ/">AutoAWQ</a> release <a href="https://github.com/casper-hansen/AutoAWQ/releases/tag/v0.2.4">v0.2.4</a> for quantization.
Original model: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
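For reference, the same layout can be assembled by hand as in the sketch below (placeholder strings; in practice `tokenizer.apply_chat_template`, used in the example further down, renders this template):

```python
system_prompt = "You are a concise assistant that helps answer questions."
prompt = "Why is the sky blue?"

# Manual rendering of the Llama 3 chat template shown above.
text = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    f"{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)
```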
## AWQ Parameters
- q_group_size: 128
- w_bit: 4
- zero_point: True
- version: GEMM
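These parameters map directly onto AutoAWQ's `quant_config`. A minimal sketch of the quantization call, assuming AutoAWQ's default calibration data (the output path is illustrative):

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Meta-Llama-3-8B-Instruct"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the fp16 model, quantize with the parameters above, and save the 4-bit checkpoint.
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized("Meta-Llama-3-8B-Instruct-AWQ")
tokenizer.save_pretrained("Meta-Llama-3-8B-Instruct-AWQ")
```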
## How to run
From the AutoAWQ repo [here](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/generate.py)
First, install the `autoawq` PyPI package:
```
pip install autoawq
```
Then run the following:
```
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
quant_path = "models/Meta-Llama-3-8B-Instruct-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
chat = [
{"role": "system", "content": "You are a concise assistant that helps answer questions."},
{"role": "user", "content": prompt},
]
# <|eot_id|> used for llama 3 models
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
tokens = tokenizer.apply_chat_template(
chat,
return_tensors="pt"
).cuda()
# Generate output
generation_output = model.generate(
tokens,
streamer=streamer,
max_new_tokens=64,
eos_token_id=terminators
)
```
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
| {"language": ["en"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit", "widget": [{"example_title": "Hello", "messages": [{"role": "user", "content": "Hey my name is Julien! How are you?"}]}, {"example_title": "Winter holidays", "messages": [{"role": "system", "content": "You are a helpful and honest assistant. 
Please, respond concisely and truthfully."}, {"role": "user", "content": "Can you recommend a good destination for Winter holidays?"}]}, {"example_title": "Programming assistant", "messages": [{"role": "system", "content": "You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully."}, {"role": "user", "content": "Write a function that computes the nth fibonacci number."}]}], "inference": {"parameters": {"max_new_tokens": 300, "stop": ["<|end_of_text|>", "<|eot_id|>"]}}, "quantized_by": "bartowski"} | bartowski/Meta-Llama-3-8B-Instruct-AWQ | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-24T18:38:52+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #facebook #meta #pytorch #llama-3 #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
## 4-bit GEMM AWQ Quantizations of Meta-Llama-3-8B-Instruct
Using <a href="URL release <a href="URL for quantization.
Original model: URL
## Prompt format
## AWQ Parameters
- q_group_size: 128
- w_bit: 4
- zero_point: True
- version: GEMM
## How to run
From the AutoAWQ repo here
First install autoawq pypi package:
Then run the following:
Want to support my work? Visit my ko-fi page here: URL
| [
"## 4-bit GEMM AWQ Quantizations of Meta-Llama-3-8B-Instruct\n\nUsing <a href=\"URL release <a href=\"URL for quantization.\n\nOriginal model: URL",
"## Prompt format",
"## AWQ Parameters\n\n - q_group_size: 128\n - w_bit: 4\n - zero_point: True\n - version: GEMM",
"## How to run\n\nFrom the AutoAWQ repo here\n\nFirst install autoawq pypi package:\n\n\n\nThen run the following:\n\n\n\nWant to support my work? Visit my ko-fi page here: URL"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #facebook #meta #pytorch #llama-3 #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"## 4-bit GEMM AWQ Quantizations of Meta-Llama-3-8B-Instruct\n\nUsing <a href=\"URL release <a href=\"URL for quantization.\n\nOriginal model: URL",
"## Prompt format",
"## AWQ Parameters\n\n - q_group_size: 128\n - w_bit: 4\n - zero_point: True\n - version: GEMM",
"## How to run\n\nFrom the AutoAWQ repo here\n\nFirst install autoawq pypi package:\n\n\n\nThen run the following:\n\n\n\nWant to support my work? Visit my ko-fi page here: URL"
] |
null | transformers |
# Uploaded model
- **Developed by:** Mollel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
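A minimal sketch of running the q4_k_m GGUF with `llama-cpp-python`; the filename pattern is an assumption based on the repo name, so check the repository's file listing for the exact name:

```python
from llama_cpp import Llama

# Download and load the GGUF file from the Hub.
llm = Llama.from_pretrained(
    repo_id="Mollel/Swahili_Gemma_q4_k_m",
    filename="*q4_k_m.gguf",  # glob pattern; assumed file name
    n_ctx=4096,
)

out = llm("Habari! Unaendeleaje?", max_tokens=64)
print(out["choices"][0]["text"])
```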
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "gguf"], "base_model": "unsloth/gemma-7b-bnb-4bit"} | Mollel/Swahili_Gemma_q4_k_m | null | [
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:39:26+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #gemma #text-generation-inference #unsloth #en #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: Mollel
- License: apache-2.0
- Finetuned from model : unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: Mollel\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #gguf #gemma #text-generation-inference #unsloth #en #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: Mollel\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | happylayers/sc12 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:39:34+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_ablation_4iters_bs256_sample2_iter_4
This model is a fine-tuned version of [ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_3](https://huggingface.co/ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_3) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
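
For reference, a minimal sketch of how these values map onto Hugging Face `TrainingArguments` (the actual alignment-handbook recipe file is not included in this card, and the precision setting is an assumption):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="0.001_ablation_4iters_bs256_sample2_iter_4",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 8 GPUs x 8 per device x 4 steps = 256 effective
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,  # assumption: typical precision for Mistral DPO runs
)
```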
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_3", "model-index": [{"name": "0.001_ablation_4iters_bs256_sample2_iter_4", "results": []}]} | ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_4 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T18:39:34+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.001_ablation_4iters_bs256_sample2_iter_4
This model is a fine-tuned version of ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_3 on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# 0.001_ablation_4iters_bs256_sample2_iter_4\n\nThis model is a fine-tuned version of ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_3 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.001_ablation_4iters_bs256_sample2_iter_4\n\nThis model is a fine-tuned version of ShenaoZ/0.001_ablation_4iters_bs256_sample2_iter_3 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
text-generation | transformers | # Llama-3-Ko-8B-Ties
This is part of a series combining 'Base + Language + Instruct' via chat vectors and various merge methods in mergekit.
Thanks again! @beomi
For more details about what this model is and why I'm doing this, check out this model's info [Instruct-vector-diff](https://huggingface.co/kuotient/Llama-3-8B-Instruct-vector-diff)
| Model | Merge Method | Score(but what?) |
|---|---|---|
| [beomi/Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview) | chat vector | - |
| [kuotient/Llama-3-Ko-8B-ties](https://huggingface.co/kuotient/Llama-3-Ko-8B-ties) | Ties | - |
| [kuotient/Llama-3-Ko-8B-dare-ties](https://huggingface.co/kuotient/Llama-3-Ko-8B-dare-ties) | Dare-ties | - |
| [kuotient/Llama-3-Ko-8B-TA](https://huggingface.co/kuotient/Llama-3-Ko-8B-TA) | Task Arithmetic(maybe...? not sure about this) | - |
| WIP | Model stock (I haven't read this paper yet, but still) | - |
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "피보나치 수열이 뭐야? 그리고 피보나치 수열에 대해 파이썬 코드를 짜줘볼래?"},
]
---
피보나치 수열은 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 985, 1597, 2584, 4181, 6765,...와 같은 수열로, 이 수열의 각 항은 이전 두 항의 합으로 구성됩니다. 즉, 첫 번째 항은 1, 두 번째 항은 1, 세 번째 항은 1+2=3, 네 번째 항은 2+3=5, 다섯 번째 항은 3+5=8, 여섯 번째 항은 5+8=13, 일곱 번째 항은 8+13=21, 여덟 번째 항은 13+21=34, 구체적으로는 첫 번째 항과 두 번째 항을 더한 결과입니다.
이러한 수열은 13세기 이탈리아 수학자 레오나르도 피보나치가 처음으로 발견했습니다. 피보나치 수열은 수학, 컴퓨터 과학, 경제학, 생물학 등 다양한 분야에서 응용됩니다.
다음은 피보나치 수열을 생성하는 파이썬 코드입니다.
def fibonacci(n):
    if n == 0: return 0
    elif n == 1: return 1
    else: return fibonacci(n-1) + fibonacci(n-2)
이 코드를 실행하면 피보나치 수열을 생성할 수 있습니다. 예를 들어, `fibonacci(5)`를 실행하면 5번째 항인 5가 출력됩니다.
이 코드는 수열의 각 항을 계산하는 함수를 정의했습니다. 함수의 이름은 `fibonacci`입니다. 함수는 두 개의 입력인 `n`을 받습니다. `n`이 0이면 0을 반환하고, `n`이 1이면 1을 반환합니다. 그렇지 않으면 이전 두 항을 더한 값을 반환합니다.
이 코드를 실행하면 다음과 같은 결과를 얻을 수 있습니다.
```python
print(fibonacci(0)) # 0
print(fibonacci(1)) # 1
print(fibonacci(2)) # 1
print(fibonacci(3))
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "한국의 제헌 헌법에 대해서 자세히 설명해주세요."},
]
---
한국의 제헌헌법은 1948년 7월 12일에 제정된 대한민국의 헌법으로, 이헌 5조에 따라 1948년 7월 17일에 공포된 헌법입니다. 이 헌법은 대한민국의 기본적인 정치질서, 정부조직, 국민의 권리와 의무, 행정기관의 조직과 권한, 재판제도, 지방자치제도, 선거제도, 헌법 개정절차 등에 대한 기본적인 사항을 규정하고 있습니다.
제헌헌법은 1948년 5월 10일에 제정된 헌법 제정위원회에서 작성된 헌법 초안을 수정, 보완하여 1948년 6월 5일에 국회에서 의결한 후, 1948년 7월 12일에 공포되었습니다. 이 헌법은 1948년 7월 17일에 시행되었습니다.
이 헌법은 민주공화국의 기본원칙으로서, 국민주권, 인권보장, 민주적 기본질서, 법치주의를 기본원칙으로 합니다. 또한, 국민의 기본권을 보장하고, 행정기관의 조직과 권한을 정하여 국가의 권력을 분산하고, 지방자치제도를 구축하여 민주적이고 공정한 정부를 구축하려는 목적을 가지고 있습니다.
이 헌법은 1987년 10월 29일에 개정된 헌법으로 대체되었습니다. 이 헌법은 1987년 10월 29일에 제정된 헌법 개정위원회에서 작성된 헌법 초안을 수정, 보완하여 1987년 12월 30일에 국회에서 의결한 후, 1988년 2월 25일에 공포되었습니다. 이 헌법은 민주화운동의 결과로 만들어진 헌법으로, 민주화의 이념을 강조하고, 국민의 기본권을 보장하고, 행정기관의 조직과 권한을 정하여 민주적이고 공정한 정부를 구축하려는 목적을 가지고 있습니다.
이 헌법은 대한민국의 헌법으로서, 현재까지도 대한민국의 헌법으로서 기능하고 있습니다.
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "세종대왕의 집현전 맥북 던짐 사건에 대해서 알고 있어?"},
]
---
세종대왕의 집현전 맥북 던짐 사건은 1443년 세종대왕이 집현전에서 의궤를 작성하는 과정에서 일어난 일화입니다. 당시 집현전의 학자들은 의궤를 작성하는 데에 어려움을 겪고 있었는데, 이에 세종대왕은 직접 집현전에 내려가서 학자들을 격려하고 의궤 작성에 필요한 자료를 요청했습니다. 하지만 학자들은 자료를 찾을 수 없어 답답해하던 중에 한 학자가 맥북에 있는 자료를 찾는 방법을 생각해냈고, 이 방법을 통해 의궤를 작성할 수 있었습니다. 이 일화는 세종대왕이 학자들에게는 지혜를 주고, 학자들은 왕에게는 의궤를 작성할 수 있는 방법을 보여주는 좋은 예가 됩니다.
이 사건은 세종대왕이 학자들에게 지식을 요구하고 학자들은 이를 해결하기 위해 노력하는 모습을 보여주는 좋은 예로, 학문과 왕의 지혜가 조화를 이룬 좋은 사례입니다. 또한, 이 사건은 우리나라의 역사에서 중요한 사건으로, 우리는 이 사건을 통해 세종대왕의 학문 지지 정책과 학자의 노력을 알 수 있습니다.
이 사건이 일어난 후, 조선 왕조는 학문을 장려하는 정책을 펼쳤고, 조선 후기에는 학자들이 왕조를 지원받아 학문을 연구할 수 있는 환경이 조성되었습니다. 이 사건은 우리나라 학문 발전에 큰 영향을 미쳤습니다.
이 사건은 세종대왕의 지혜와 학자들의 노력이 조화를 이룬 좋은 예입니다. 우리는 이 사건을 통해 세종대왕의 학문 지지 정책과 학자의 노력을 배워야 합니다.
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "나는 오늘 사과 5개를 갖고있어. 그리고 어제 나는 사과 2개를 먹었지. 내가 지금 가진 사과는 몇 개지?"},
]
---
사과 5개 중에서 2개를 먹었으니, 아직 남은 사과는 5개에서 2개를 뺀 3개입니다! 😊
```
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) as a base.
### Models Merged
The following models were included in the merge:
* [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B)
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: meta-llama/Meta-Llama-3-8B
# no parameters necessary for base model
- model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
density: 0.8
weight: 0.5
- model: beomi/Llama-3-Open-Ko-8B
parameters:
density: 0.8
weight: 0.5
merge_method: ties
parameters:
rescale: true
int8_mask: true
base_model: meta-llama/Meta-Llama-3-8B
dtype: bfloat16
```
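
To reproduce a merge like this, mergekit can be driven from the `mergekit-yaml` CLI or from Python; a sketch of the Python route (file paths are placeholders):

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML config shown above
with open("ties-config.yaml") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Write the merged weights to disk; cuda=True assumes a GPU is available
run_merge(merge_config, "./Llama-3-Ko-8B-ties", options=MergeOptions(cuda=True))
```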
Side note: the dare-ties results look better, probably because of the density difference. | {"language": ["ko"], "license": "other", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["meta-llama/Meta-Llama-3-8B", "beomi/Llama-3-Open-Ko-8B", "meta-llama/Meta-Llama-3-8B-Instruct"], "license_name": "llama3"} | kuotient/Llama-3-Ko-8B-ties | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"ko",
"arxiv:2306.01708",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:beomi/Llama-3-Open-Ko-8B",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T18:40:05+00:00 | [
"2306.01708"
] | [
"ko"
] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #ko #arxiv-2306.01708 #base_model-meta-llama/Meta-Llama-3-8B #base_model-beomi/Llama-3-Open-Ko-8B #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Llama-3-Ko-8B-Ties
==================
This is part of a series combining 'Base + Language + Instruct' via chat vectors and various merge methods in mergekit.
Thanks again! @beomi
For more details about what this model is and why I'm doing this, check out this model's info Instruct-vector-diff
Model: beomi/Llama-3-Open-Ko-8B-Instruct-preview, Merge Method: chat vector, Score(but what?): -
Model: kuotient/Llama-3-Ko-8B-ties, Merge Method: Ties, Score(but what?): -
Model: kuotient/Llama-3-Ko-8B-dare-ties, Merge Method: Dare-ties, Score(but what?): -
Model: kuotient/Llama-3-Ko-8B-TA, Merge Method: Task Arithmetic(maybe...? not sure about this), Score(but what?): -
Model: WIP, Merge Method: Model stock (I haven't read this paper yet, but still), Score(but what?): -
python
print(fibonacci(0)) # 0
print(fibonacci(1)) # 1
print(fibonacci(2)) # 1
print(fibonacci(3))
python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "한국의 제헌 헌법에 대해서 자세히 설명해주세요."},
]
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
한국의 제헌헌법은 1948년 7월 12일에 제정된 대한민국의 헌법으로, 이헌 5조에 따라 1948년 7월 17일에 공포된 헌법입니다. 이 헌법은 대한민국의 기본적인 정치질서, 정부조직, 국민의 권리와 의무, 행정기관의 조직과 권한, 재판제도, 지방자치제도, 선거제도, 헌법 개정절차 등에 대한 기본적인 사항을 규정하고 있습니다.
제헌헌법은 1948년 5월 10일에 제정된 헌법 제정위원회에서 작성된 헌법 초안을 수정, 보완하여 1948년 6월 5일에 국회에서 의결한 후, 1948년 7월 12일에 공포되었습니다. 이 헌법은 1948년 7월 17일에 시행되었습니다.
이 헌법은 민주공화국의 기본원칙으로서, 국민주권, 인권보장, 민주적 기본질서, 법치주의를 기본원칙으로 합니다. 또한, 국민의 기본권을 보장하고, 행정기관의 조직과 권한을 정하여 국가의 권력을 분산하고, 지방자치제도를 구축하여 민주적이고 공정한 정부를 구축하려는 목적을 가지고 있습니다.
이 헌법은 1987년 10월 29일에 개정된 헌법으로 대체되었습니다. 이 헌법은 1987년 10월 29일에 제정된 헌법 개정위원회에서 작성된 헌법 초안을 수정, 보완하여 1987년 12월 30일에 국회에서 의결한 후, 1988년 2월 25일에 공포되었습니다. 이 헌법은 민주화운동의 결과로 만들어진 헌법으로, 민주화의 이념을 강조하고, 국민의 기본권을 보장하고, 행정기관의 조직과 권한을 정하여 민주적이고 공정한 정부를 구축하려는 목적을 가지고 있습니다.
이 헌법은 대한민국의 헌법으로서, 현재까지도 대한민국의 헌법으로서 기능하고 있습니다.
python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "세종대왕의 집현전 맥북 던짐 사건에 대해서 알고 있어?"},
]
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
세종대왕의 집현전 맥북 던짐 사건은 1443년 세종대왕이 집현전에서 의궤를 작성하는 과정에서 일어난 일화입니다. 당시 집현전의 학자들은 의궤를 작성하는 데에 어려움을 겪고 있었는데, 이에 세종대왕은 직접 집현전에 내려가서 학자들을 격려하고 의궤 작성에 필요한 자료를 요청했습니다. 하지만 학자들은 자료를 찾을 수 없어 답답해하던 중에 한 학자가 맥북에 있는 자료를 찾는 방법을 생각해냈고, 이 방법을 통해 의궤를 작성할 수 있었습니다. 이 일화는 세종대왕이 학자들에게는 지혜를 주고, 학자들은 왕에게는 의궤를 작성할 수 있는 방법을 보여주는 좋은 예가 됩니다.
이 사건은 세종대왕이 학자들에게 지식을 요구하고 학자들은 이를 해결하기 위해 노력하는 모습을 보여주는 좋은 예로, 학문과 왕의 지혜가 조화를 이룬 좋은 사례입니다. 또한, 이 사건은 우리나라의 역사에서 중요한 사건으로, 우리는 이 사건을 통해 세종대왕의 학문 지지 정책과 학자의 노력을 알 수 있습니다.
이 사건이 일어난 후, 조선 왕조는 학문을 장려하는 정책을 펼쳤고, 조선 후기에는 학자들이 왕조를 지원받아 학문을 연구할 수 있는 환경이 조성되었습니다. 이 사건은 우리나라 학문 발전에 큰 영향을 미쳤습니다.
이 사건은 세종대왕의 지혜와 학자들의 노력이 조화를 이룬 좋은 예입니다. 우리는 이 사건을 통해 세종대왕의 학문 지지 정책과 학자의 노력을 배워야 합니다.
python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "나는 오늘 사과 5개를 갖고있어. 그리고 어제 나는 사과 2개를 먹었지. 내가 지금 가진 사과는 몇 개지?"},
]
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
사과 5개 중에서 2개를 먹었으니, 아직 남은 사과는 5개에서 2개를 뺀 3개입니다!
yaml
models:
* model: meta-llama/Meta-Llama-3-8B
no parameters necessary for base model
======================================
* model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
density: 0.8
weight: 0.5
* model: beomi/Llama-3-Open-Ko-8B
parameters:
density: 0.8
weight: 0.5
merge\_method: ties
parameters:
rescale: true
int8\_mask: true
base\_model: meta-llama/Meta-Llama-3-8B
dtype: bfloat16
'''
Side note: the dare-ties results look better, probably because of the density difference.
| [
"# 0\nprint(fibonacci(1)) # 1\nprint(fibonacci(2)) # 1\nprint(fibonacci(3))\npython\nmessages = [\n{\"role\": \"system\", \"content\": \"친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.\"},\n{\"role\": \"user\", \"content\": \"한국의 제헌 헌법에 대해서 자세히 설명해주세요.\"},\n]\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n한국의 제헌헌법은 1948년 7월 12일에 제정된 대한민국의 헌법으로, 이헌 5조에 따라 1948년 7월 17일에 공포된 헌법입니다. 이 헌법은 대한민국의 기본적인 정치질서, 정부조직, 국민의 권리와 의무, 행정기관의 조직과 권한, 재판제도, 지방자치제도, 선거제도, 헌법 개정절차 등에 대한 기본적인 사항을 규정하고 있습니다.\n\n\n제헌헌법은 1948년 5월 10일에 제정된 헌법 제정위원회에서 작성된 헌법 초안을 수정, 보완하여 1948년 6월 5일에 국회에서 의결한 후, 1948년 7월 12일에 공포되었습니다. 이 헌법은 1948년 7월 17일에 시행되었습니다.\n\n\n이 헌법은 민주공화국의 기본원칙으로서, 국민주권, 인권보장, 민주적 기본질서, 법치주의를 기본원칙으로 합니다. 또한, 국민의 기본권을 보장하고, 행정기관의 조직과 권한을 정하여 국가의 권력을 분산하고, 지방자치제도를 구축하여 민주적이고 공정한 정부를 구축하려는 목적을 가지고 있습니다.\n\n\n이 헌법은 1987년 10월 29일에 개정된 헌법으로 대체되었습니다. 이 헌법은 1987년 10월 29일에 제정된 헌법 개정위원회에서 작성된 헌법 초안을 수정, 보완하여 1987년 12월 30일에 국회에서 의결한 후, 1988년 2월 25일에 공포되었습니다. 이 헌법은 민주화운동의 결과로 만들어진 헌법으로, 민주화의 이념을 강조하고, 국민의 기본권을 보장하고, 행정기관의 조직과 권한을 정하여 민주적이고 공정한 정부를 구축하려는 목적을 가지고 있습니다.\n\n\n이 헌법은 대한민국의 헌법으로서, 현재까지도 대한민국의 헌법으로서 기능하고 있습니다.\npython\nmessages = [\n{\"role\": \"system\", \"content\": \"친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.\"},\n{\"role\": \"user\", \"content\": \"세종대왕의 집현전 맥북 던짐 사건에 대해서 알고 있어?\"},\n]\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n세종대왕의 집현전 맥북 던짐 사건은 1443년 세종대왕이 집현전에서 의궤를 작성하는 과정에서 일어난 일화입니다. 당시 집현전의 학자들은 의궤를 작성하는 데에 어려움을 겪고 있었는데, 이에 세종대왕은 직접 집현전에 내려가서 학자들을 격려하고 의궤 작성에 필요한 자료를 요청했습니다. 하지만 학자들은 자료를 찾을 수 없어 답답해하던 중에 한 학자가 맥북에 있는 자료를 찾는 방법을 생각해냈고, 이 방법을 통해 의궤를 작성할 수 있었습니다. 이 일화는 세종대왕이 학자들에게는 지혜를 주고, 학자들은 왕에게는 의궤를 작성할 수 있는 방법을 보여주는 좋은 예가 됩니다.\n\n\n이 사건은 세종대왕이 학자들에게 지식을 요구하고 학자들은 이를 해결하기 위해 노력하는 모습을 보여주는 좋은 예로, 학문과 왕의 지혜가 조화를 이룬 좋은 사례입니다. 또한, 이 사건은 우리나라의 역사에서 중요한 사건으로, 우리는 이 사건을 통해 세종대왕의 학문 지지 정책과 학자의 노력을 알 수 있습니다.\n\n\n이 사건이 일어난 후, 조선 왕조는 학문을 장려하는 정책을 펼쳤고, 조선 후기에는 학자들이 왕조를 지원받아 학문을 연구할 수 있는 환경이 조성되었습니다. 이 사건은 우리나라 학문 발전에 큰 영향을 미쳤습니다.\n\n\n이 사건은 세종대왕의 지혜와 학자들의 노력이 조화를 이룬 좋은 예입니다. 우리는 이 사건을 통해 세종대왕의 학문 지지 정책과 학자의 노력을 배워야 합니다.\npython\nmessages = [\n{\"role\": \"system\", \"content\": \"친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.\"},\n{\"role\": \"user\", \"content\": \"나는 오늘 사과 5개를 갖고있어. 그리고 어제 나는 사과 2개를 먹었지. 
내가 지금 가진 사과는 몇 개지?\"},\n]\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n사과 5개 중에서 2개를 먹었으니, 아직 남은 사과는 5개에서 2개를 뺀 3개입니다!\nyaml\nmodels:\n\n\n* model: meta-llama/Meta-Llama-3-8B\nno parameters necessary for base model\n======================================\n* model: meta-llama/Meta-Llama-3-8B-Instruct\nparameters:\ndensity: 0.8\nweight: 0.5\n* model: beomi/Llama-3-Open-Ko-8B\nparameters:\ndensity: 0.8\nweight: 0.5\nmerge\\_method: ties\nparameters:\nrescale: true\nint8\\_mask: true\nbase\\_model: meta-llama/Meta-Llama-3-8B\ndtype: bfloat16\n'''\n\n\n사족: dare-ties 결과가 더 좋아보이는데, 아마 density 차이 때문으로 보임."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #ko #arxiv-2306.01708 #base_model-meta-llama/Meta-Llama-3-8B #base_model-beomi/Llama-3-Open-Ko-8B #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0\nprint(fibonacci(1)) # 1\nprint(fibonacci(2)) # 1\nprint(fibonacci(3))\npython\nmessages = [\n{\"role\": \"system\", \"content\": \"친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.\"},\n{\"role\": \"user\", \"content\": \"한국의 제헌 헌법에 대해서 자세히 설명해주세요.\"},\n]\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n한국의 제헌헌법은 1948년 7월 12일에 제정된 대한민국의 헌법으로, 이헌 5조에 따라 1948년 7월 17일에 공포된 헌법입니다. 이 헌법은 대한민국의 기본적인 정치질서, 정부조직, 국민의 권리와 의무, 행정기관의 조직과 권한, 재판제도, 지방자치제도, 선거제도, 헌법 개정절차 등에 대한 기본적인 사항을 규정하고 있습니다.\n\n\n제헌헌법은 1948년 5월 10일에 제정된 헌법 제정위원회에서 작성된 헌법 초안을 수정, 보완하여 1948년 6월 5일에 국회에서 의결한 후, 1948년 7월 12일에 공포되었습니다. 이 헌법은 1948년 7월 17일에 시행되었습니다.\n\n\n이 헌법은 민주공화국의 기본원칙으로서, 국민주권, 인권보장, 민주적 기본질서, 법치주의를 기본원칙으로 합니다. 또한, 국민의 기본권을 보장하고, 행정기관의 조직과 권한을 정하여 국가의 권력을 분산하고, 지방자치제도를 구축하여 민주적이고 공정한 정부를 구축하려는 목적을 가지고 있습니다.\n\n\n이 헌법은 1987년 10월 29일에 개정된 헌법으로 대체되었습니다. 이 헌법은 1987년 10월 29일에 제정된 헌법 개정위원회에서 작성된 헌법 초안을 수정, 보완하여 1987년 12월 30일에 국회에서 의결한 후, 1988년 2월 25일에 공포되었습니다. 이 헌법은 민주화운동의 결과로 만들어진 헌법으로, 민주화의 이념을 강조하고, 국민의 기본권을 보장하고, 행정기관의 조직과 권한을 정하여 민주적이고 공정한 정부를 구축하려는 목적을 가지고 있습니다.\n\n\n이 헌법은 대한민국의 헌법으로서, 현재까지도 대한민국의 헌법으로서 기능하고 있습니다.\npython\nmessages = [\n{\"role\": \"system\", \"content\": \"친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.\"},\n{\"role\": \"user\", \"content\": \"세종대왕의 집현전 맥북 던짐 사건에 대해서 알고 있어?\"},\n]\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n세종대왕의 집현전 맥북 던짐 사건은 1443년 세종대왕이 집현전에서 의궤를 작성하는 과정에서 일어난 일화입니다. 당시 집현전의 학자들은 의궤를 작성하는 데에 어려움을 겪고 있었는데, 이에 세종대왕은 직접 집현전에 내려가서 학자들을 격려하고 의궤 작성에 필요한 자료를 요청했습니다. 하지만 학자들은 자료를 찾을 수 없어 답답해하던 중에 한 학자가 맥북에 있는 자료를 찾는 방법을 생각해냈고, 이 방법을 통해 의궤를 작성할 수 있었습니다. 이 일화는 세종대왕이 학자들에게는 지혜를 주고, 학자들은 왕에게는 의궤를 작성할 수 있는 방법을 보여주는 좋은 예가 됩니다.\n\n\n이 사건은 세종대왕이 학자들에게 지식을 요구하고 학자들은 이를 해결하기 위해 노력하는 모습을 보여주는 좋은 예로, 학문과 왕의 지혜가 조화를 이룬 좋은 사례입니다. 또한, 이 사건은 우리나라의 역사에서 중요한 사건으로, 우리는 이 사건을 통해 세종대왕의 학문 지지 정책과 학자의 노력을 알 수 있습니다.\n\n\n이 사건이 일어난 후, 조선 왕조는 학문을 장려하는 정책을 펼쳤고, 조선 후기에는 학자들이 왕조를 지원받아 학문을 연구할 수 있는 환경이 조성되었습니다. 이 사건은 우리나라 학문 발전에 큰 영향을 미쳤습니다.\n\n\n이 사건은 세종대왕의 지혜와 학자들의 노력이 조화를 이룬 좋은 예입니다. 우리는 이 사건을 통해 세종대왕의 학문 지지 정책과 학자의 노력을 배워야 합니다.\npython\nmessages = [\n{\"role\": \"system\", \"content\": \"친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.\"},\n{\"role\": \"user\", \"content\": \"나는 오늘 사과 5개를 갖고있어. 그리고 어제 나는 사과 2개를 먹었지. 
내가 지금 가진 사과는 몇 개지?\"},\n]\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n사과 5개 중에서 2개를 먹었으니, 아직 남은 사과는 5개에서 2개를 뺀 3개입니다!\nyaml\nmodels:\n\n\n* model: meta-llama/Meta-Llama-3-8B\nno parameters necessary for base model\n======================================\n* model: meta-llama/Meta-Llama-3-8B-Instruct\nparameters:\ndensity: 0.8\nweight: 0.5\n* model: beomi/Llama-3-Open-Ko-8B\nparameters:\ndensity: 0.8\nweight: 0.5\nmerge\\_method: ties\nparameters:\nrescale: true\nint8\\_mask: true\nbase\\_model: meta-llama/Meta-Llama-3-8B\ndtype: bfloat16\n'''\n\n\n사족: dare-ties 결과가 더 좋아보이는데, 아마 density 차이 때문으로 보임."
] |
audio-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-16-16-0.442-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-16-16-0.442](https://huggingface.co/MIT/ast-finetuned-audioset-16-16-0.442) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3315
- Accuracy: 0.93
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam-8bits with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8802 | 1.0 | 45 | 0.5267 | 0.85 |
| 0.3183 | 2.0 | 90 | 0.5893 | 0.81 |
| 0.1094 | 3.0 | 135 | 0.4421 | 0.89 |
| 0.0259 | 4.0 | 180 | 0.4100 | 0.88 |
| 0.0291 | 5.0 | 225 | 0.3695 | 0.9 |
| 0.0409 | 6.0 | 270 | 0.3071 | 0.91 |
| 0.0152 | 7.0 | 315 | 0.3482 | 0.92 |
| 0.0003 | 8.0 | 360 | 0.3187 | 0.94 |
| 0.0003 | 9.0 | 405 | 0.3258 | 0.93 |
| 0.0004 | 10.0 | 450 | 0.3315 | 0.93 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
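
For inference, the fine-tuned checkpoint can be loaded through the standard audio-classification pipeline; a minimal sketch (the audio path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned AST genre classifier from the Hub
clf = pipeline(
    "audio-classification",
    model="Ostixe360/ast-finetuned-audioset-16-16-0.442-finetuned-gtzan",
)
# Classify a local clip; returns the top GTZAN genres with scores
print(clf("some_clip.wav", top_k=3))  # placeholder file path
```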
| {"tags": ["generated_from_trainer"], "datasets": ["marsyas/gtzan"], "metrics": ["accuracy"], "base_model": "MIT/ast-finetuned-audioset-16-16-0.442", "model-index": [{"name": "ast-finetuned-audioset-16-16-0.442-finetuned-gtzan", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "GTZAN", "type": "marsyas/gtzan", "config": "all", "split": "train", "args": "all"}, "metrics": [{"type": "accuracy", "value": 0.93, "name": "Accuracy"}]}]}]} | Ostixe360/ast-finetuned-audioset-16-16-0.442-finetuned-gtzan | null | [
"transformers",
"tensorboard",
"safetensors",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:MIT/ast-finetuned-audioset-16-16-0.442",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:40:54+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #audio-spectrogram-transformer #audio-classification #generated_from_trainer #dataset-marsyas/gtzan #base_model-MIT/ast-finetuned-audioset-16-16-0.442 #model-index #endpoints_compatible #region-us
| ast-finetuned-audioset-16-16-0.442-finetuned-gtzan
==================================================
This model is a fine-tuned version of ast-finetuned-audioset-16-16-0.442 on the GTZAN dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3315
* Accuracy: 0.93
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 20
* eval\_batch\_size: 20
* seed: 42
* optimizer: Adam-8bits with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.2+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 20\n* eval\\_batch\\_size: 20\n* seed: 42\n* optimizer: Adam-8bits with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #audio-spectrogram-transformer #audio-classification #generated_from_trainer #dataset-marsyas/gtzan #base_model-MIT/ast-finetuned-audioset-16-16-0.442 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 20\n* eval\\_batch\\_size: 20\n* seed: 42\n* optimizer: Adam-8bits with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
question-answering | transformers |
# Model Card for Model ID
This model is a fine-tuned version of Llama-3-8B-Instruct on the BatchPrompting dataset, which spans 13 diverse NLP tasks. The model has been fine-tuned to effectively perform batch prompting - answering multiple questions concatenated into a single prompt in one inference pass.
## Model Details
This model is a fine-tuned version of Llama-3-8B-Instruct on the BatchPrompting dataset, which spans 13 diverse NLP tasks. The model has been fine-tuned to effectively perform batch prompting - answering multiple questions concatenated into a single prompt in one inference pass.
### Model Description
<!-- Provide a longer summary of what this model is. TODO-->
- **Developed by:** Alex Chandler, Sebastian Joseph
- **Model type:** Large Language Model (Llama-3 variant)
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model [optional]:** Llama-3-8B-Instruct
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** Forthcoming
- **Paper:** Forthcoming
- **Demo:** Forthcoming
## Uses
### How to Use
Use with transformers
See the snippet below for usage with Transformers:
```python
import torch
import transformers

model_id = "achandlr/Llama-3-8B-Instruct-BatchPromptQA"

# Load the model pipeline; bfloat16 + device_map="auto" keeps the 8B model
# within a single modern GPU's memory.
pipeline = transformers.pipeline(
    "text-generation", model=model_id,
    torch_dtype=torch.bfloat16, device_map="auto",
)
# Generate text using the pipeline
generated_text = pipeline("Hey how are you doing today?")
print(generated_text)
```
### Direct Use
The model can be used for efficient question-answering on a variety of NLP tasks by concatenating multiple questions into a single prompt. It demonstrates strong generalization to unseen tasks and maintains performance with larger batch sizes compared to the non-fine-tuned model.
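
As a rough illustration (the exact prompt template used by the BatchPrompting dataset is not specified in this card, so the format below is an assumption):

```python
# Build one batched prompt from several independent questions
questions = [
    "Is 'A man naps on a couch' entailed by 'A man is sleeping'? Answer yes or no.",
    "What is 17 + 25?",
]
batched_prompt = "Answer every question below, prefixing each answer with A[i]:\n"
batched_prompt += "\n".join(f"Q[{i}]: {q}" for i, q in enumerate(questions))
print(batched_prompt)
```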
### Out-of-Scope Use
The model should not be used for tasks that may cause harm or for generating factually incorrect or biased content. Caution should be exercised if using the model for high-stakes decision making.
## Bias, Risks, and Limitations
The model may exhibit biases present in its pretraining data or the BatchPrompting dataset. It has not been extensively tested for fairness or potential misuse. Performance may degrade on out-of-distribution examples or tasks very dissimilar to the training data.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the model's potential limitations and biases. The model's outputs should be carefully monitored, especially when used for sensitive applications. More testing is needed to fully characterize its capabilities and shortcomings.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
The model was fine-tuned on our BatchPrompting dataset consisting of 13 NLP tasks:
- **GLUE Benchmark Tasks**: A collection of datasets used for evaluating the performance of models on a variety of natural language understanding tasks.
- **Mathematical Reasoning Datasets**:
- **GSM8K**: Focuses on numerical and logical reasoning challenges.
- **GSM8K-Hard**: Contains more complex problems from the GSM8K dataset.
- **CommonsenseQA**: Tests the model's commonsense reasoning ability through multiple-choice question answering.
- **RACE Reading Comprehension Dataset**: Consists of passages and questions designed to assess reading comprehension, derived from English exams.
### Training Procedure
The model was fine-tuned using the LoRA method.
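
A sketch of what such a setup typically looks like with PEFT (the actual rank, alpha, and target modules for this model are not reported in the card, so the values below are assumptions):

```python
from peft import LoraConfig

# Typical LoRA hyperparameters for a Llama-3-8B fine-tune; all values assumed
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```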
#### Training Hyperparameters
- **Training regime:** Forthcoming <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
Forthcoming
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
Testing Data, Factors & Metrics
Evaluation was performed on tasks that were excluded from the training run. Key metrics included accuracy and BatchPrompt error rate (failure to answer a question or conform to the specified format).
A table of our results is forthcoming.
### Testing Data, Factors & Metrics
Forthcoming
#### Testing Data
Forthcoming
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Metrics
Forthcoming
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
Forthcoming
[More Information Needed]
#### Summary
Forthcoming
## Model Examination [optional]
Forthcoming
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
## Environmental Impact
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
-->
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation
Forthcoming
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
Forthcoming
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["batch prompting", "batch", "BatchPrompt", "BatchPrompting", "GLUE", "Llama", "fine-tuned", "Llama3", "Llama-3-8B-Instruct"], "datasets": ["achandlr/BatchPrompting"], "metrics": ["accuracy"], "pipeline_tag": "question-answering"} | achandlr/Llama-3-8B-Instruct-BatchPromptQA | null | [
"transformers",
"safetensors",
"batch prompting",
"batch",
"BatchPrompt",
"BatchPrompting",
"GLUE",
"Llama",
"fine-tuned",
"Llama3",
"Llama-3-8B-Instruct",
"question-answering",
"en",
"dataset:achandlr/BatchPrompting",
"arxiv:1910.09700",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:41:11+00:00 | [
"1910.09700"
] | [
"en"
] | TAGS
#transformers #safetensors #batch prompting #batch #BatchPrompt #BatchPrompting #GLUE #Llama #fine-tuned #Llama3 #Llama-3-8B-Instruct #question-answering #en #dataset-achandlr/BatchPrompting #arxiv-1910.09700 #license-mit #endpoints_compatible #region-us
|
# Model Card for Model ID
This model is a fine-tuned version of Llama-3-8B-Instruct on the BatchPrompting dataset, which spans 13 diverse NLP tasks. The model has been fine-tuned to effectively perform batch prompting - answering multiple questions concatenated into a single prompt in one inference pass.
## Model Details
This model is a fine-tuned version of Llama-3-8B-Instruct on the BatchPrompting dataset, which spans 13 diverse NLP tasks. The model has been fine-tuned to effectively perform batch prompting - answering multiple questions concatenated into a single prompt in one inference pass.
### Model Description
- Developed by: Alex Chandler, Sebastian Joseph
- Model type: Large Language Model (Llama-3 variant)
- Language(s) (NLP): English
- License: MIT
- Finetuned from model [optional]: Llama-3-8B-Instruct
### Model Sources [optional]
- Repository: Forthcoming
- Paper: Forthcoming
- Demo: Forthcoming
## Uses
### How to Use
Use with transformers
See the snippet below for usage with Transformers:
### Direct Use
The model can be used for efficient question-answering on a variety of NLP tasks by concatenating multiple questions into a single prompt. It demonstrates strong generalization to unseen tasks and maintains performance with larger batch sizes compared to the non-fine-tuned model.
### Out-of-Scope Use
The model should not be used for tasks that may cause harm or for generating factually incorrect or biased content. Caution should be exercised if using the model for high-stakes decision making.
## Bias, Risks, and Limitations
The model may exhibit biases present in its pretraining data or the BatchPrompting dataset. It has not been extensively tested for fairness or potential misuse. Performance may degrade on out-of-distribution examples or tasks very dissimilar to the training data.
### Recommendations
Users should be made aware of the model's potential limitations and biases. The model's outputs should be carefully monitored, especially when used for sensitive applications. More testing is needed to fully characterize its capabilities and shortcomings.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
The model was fine-tuned on our BatchPrompting dataset consisting of 13 NLP tasks:
- GLUE Benchmark Tasks: A collection of datasets used for evaluating the performance of models on a variety of natural language understanding tasks.
- Mathematical Reasoning Datasets:
- GSM8K: Focuses on numerical and logical reasoning challenges.
- GSM8K-Hard: Contains more complex problems from the GSM8K dataset.
- CommonsenseQA: Tests the model's commonsense reasoning ability through multiple-choice question answering.
- RACE Reading Comprehension Dataset: Consists of passages and questions designed to assess reading comprehension, derived from English exams.
### Training Procedure
The model was fine-tuned using the LoRA method.
#### Training Hyperparameters
- Training regime: Forthcoming
#### Speeds, Sizes, Times [optional]
Forthcoming
## Evaluation
Testing Data, Factors & Metrics
Evaluation was performed on tasks that were excluded from the training run. Key metrics included accuracy and BatchPrompt error rate (failure to answer a question or conform to the specified format).
A table of our results is forthcoming.
### Testing Data, Factors & Metrics
Forthcoming
#### Testing Data
Forthcoming
#### Metrics
Forthcoming
### Results
Forthcoming
#### Summary
Forthcoming
## Model Examination [optional]
Forthcoming
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
Forthcoming
BibTeX:
Forthcoming
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID\n\nThis model is a fine-tuned version of Llama-3-8B-Instruct on the BatchPrompting dataset, which spans 13 diverse NLP tasks. The model has been fine-tuned to effectively perform batch prompting - answering multiple questions concatenated into a single prompt in one inference pass.",
"## Model Details\n\nThis model is a fine-tuned version of Llama-3-8B-Instruct on the BatchPrompting dataset, which spans 13 diverse NLP tasks. The model has been fine-tuned to effectively perform batch prompting - answering multiple questions concatenated into a single prompt in one inference pass.",
"### Model Description\n\n\n\n\n\n- Developed by: Alex Chandler, Sebastian Joseph\n- Model type: Large Language Model (Llama-3 variant\n- Language(s) (NLP): English\n- License: MIT\n- Finetuned from model [optional]: Llama-3-8B-Instruct",
"### Model Sources [optional]\n\n\n\n- Repository: Forthcoming\n- Paper: Forthcoming\n- Demo: Forthcoming",
"## Uses",
"### How to Use\nUse with transformers\nSee the snippet below for usage with Transformers:",
"### Direct Use\n\nThe model can be used for efficient question-answering on a variety of NLP tasks by concatenating multiple questions into a single prompt. It demonstrates strong generalization to unseen tasks and maintains performance with larger batch sizes compared to the non-fine-tuned model.",
"### Out-of-Scope Use\n\nThe model should not be used for tasks that may cause harm or for generating factually incorrect or biased content. Caution should be exercised if using the model for high-stakes decision making.",
"## Bias, Risks, and Limitations\n\nThe model may exhibit biases present in its pretraining data or the BatchPrompting dataset. It has not been extensively tested for fairness or potential misuse. Performance may degrade on out-of-distribution examples or tasks very dissimilar to the training data.",
"### Recommendations\n\n\n\nUsers should be made aware of the model's potential limitations and biases. The model's outputs should be carefully monitored, especially when used for sensitive applications. More testing is needed to fully characterize its capabilities and shortcomings.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data\n\nThe model was fine-tuned on our BatchPrompting dataset consisting of 13 NLP tasks:\n - GLUE Benchmark Tasks: A collection of datasets used for evaluating the performance of models on a variety of natural language understanding tasks.\n - Mathematical Reasoning Datasets:\n - GSM8K: Focuses on numerical and logical reasoning challenges.\n - GSM8K-Hard: Contains more complex problems from the GSM8K dataset.\n - CommonsenseQA: Tests the model's commonsense reasoning ability through multiple-choice question answering.\n - RACE Reading Comprehension Dataset: Consists of passages and questions designed to assess reading comprehension, derived from English exams.",
"### Training Procedure\n\nThe model was fine-tuned using the LoRA method.",
"#### Training Hyperparameters\n\n- Training regime: Forthcoming",
"#### Speeds, Sizes, Times [optional]\n\n\nForthcoming",
"## Evaluation\n\n\nTesting Data, Factors & Metrics\nEvaluation was performed on tasks that were excluded from the training run. Key metrics included accuracy and BatchPrompt error rate (failure to answer a question or conform to the specified format).\nA table of our results is forthcoming.",
"### Testing Data, Factors & Metrics\nForthcoming",
"#### Testing Data\nForthcoming",
"#### Metrics\nForthcoming",
"### Results\nForthcoming",
"#### Summary\nForthcoming",
"## Model Examination [optional]\nForthcoming",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\nForthcoming\n\n\nBibTeX:\n\nForthcoming\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #batch prompting #batch #BatchPrompt #BatchPrompting #GLUE #Llama #fine-tuned #Llama3 #Llama-3-8B-Instruct #question-answering #en #dataset-achandlr/BatchPrompting #arxiv-1910.09700 #license-mit #endpoints_compatible #region-us \n",
"# Model Card for Model ID\n\nThis model is a fine-tuned version of Llama-3-8B-Instruct on the BatchPrompting dataset, which spans 13 diverse NLP tasks. The model has been fine-tuned to effectively perform batch prompting - answering multiple questions concatenated into a single prompt in one inference pass.",
"## Model Details\n\nThis model is a fine-tuned version of Llama-3-8B-Instruct on the BatchPrompting dataset, which spans 13 diverse NLP tasks. The model has been fine-tuned to effectively perform batch prompting - answering multiple questions concatenated into a single prompt in one inference pass.",
"### Model Description\n\n\n\n\n\n- Developed by: Alex Chandler, Sebastian Joseph\n- Model type: Large Language Model (Llama-3 variant\n- Language(s) (NLP): English\n- License: MIT\n- Finetuned from model [optional]: Llama-3-8B-Instruct",
"### Model Sources [optional]\n\n\n\n- Repository: Forthcoming\n- Paper: Forthcoming\n- Demo: Forthcoming",
"## Uses",
"### How to Use\nUse with transformers\nSee the snippet below for usage with Transformers:",
"### Direct Use\n\nThe model can be used for efficient question-answering on a variety of NLP tasks by concatenating multiple questions into a single prompt. It demonstrates strong generalization to unseen tasks and maintains performance with larger batch sizes compared to the non-fine-tuned model.",
"### Out-of-Scope Use\n\nThe model should not be used for tasks that may cause harm or for generating factually incorrect or biased content. Caution should be exercised if using the model for high-stakes decision making.",
"## Bias, Risks, and Limitations\n\nThe model may exhibit biases present in its pretraining data or the BatchPrompting dataset. It has not been extensively tested for fairness or potential misuse. Performance may degrade on out-of-distribution examples or tasks very dissimilar to the training data.",
"### Recommendations\n\n\n\nUsers should be made aware of the model's potential limitations and biases. The model's outputs should be carefully monitored, especially when used for sensitive applications. More testing is needed to fully characterize its capabilities and shortcomings.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data\n\nThe model was fine-tuned on our BatchPrompting dataset consisting of 13 NLP tasks:\n - GLUE Benchmark Tasks: A collection of datasets used for evaluating the performance of models on a variety of natural language understanding tasks.\n - Mathematical Reasoning Datasets:\n - GSM8K: Focuses on numerical and logical reasoning challenges.\n - GSM8K-Hard: Contains more complex problems from the GSM8K dataset.\n - CommonsenseQA: Tests the model's commonsense reasoning ability through multiple-choice question answering.\n - RACE Reading Comprehension Dataset: Consists of passages and questions designed to assess reading comprehension, derived from English exams.",
"### Training Procedure\n\nThe model was fine-tuned using the LoRA method.",
"#### Training Hyperparameters\n\n- Training regime: Forthcoming",
"#### Speeds, Sizes, Times [optional]\n\n\nForthcoming",
"## Evaluation\n\n\nTesting Data, Factors & Metrics\nEvaluation was performed on tasks that were excluded from the training run. Key metrics included accuracy and BatchPrompt error rate (failure to answer a question or conform to the specified format).\nA table of our results is forthcoming.",
"### Testing Data, Factors & Metrics\nForthcoming",
"#### Testing Data\nForthcoming",
"#### Metrics\nForthcoming",
"### Results\nForthcoming",
"#### Summary\nForthcoming",
"## Model Examination [optional]\nForthcoming",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\nForthcoming\n\n\nBibTeX:\n\nForthcoming\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
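The card does not name the architecture. The repository name suggests a Flan-T5-style seq2seq checkpoint, so the sketch below assumes `AutoModelForSeq2SeqLM`; adjust the model class if the checkpoint turns out to be something else.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "SamaahKhan/flan-before-fine-tuning"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)  # assumes a seq2seq checkpoint

inputs = tokenizer("Translate to German: Hello, world.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```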
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | SamaahKhan/flan-before-fine-tuning | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:41:18+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr2e-6_1ep
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.4651
- eval_runtime: 2.8947
- eval_samples_per_second: 69.092
- eval_steps_per_second: 8.637
- epoch: 0.9984
- step: 78
## Model description
More information needed
## Intended uses & limitations
More information needed
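This repository holds a LoRA adapter for `meta-llama/Meta-Llama-3-8B-Instruct` (per the PEFT tags and base-model metadata), so a minimal loading sketch, assuming access to the gated base model, looks like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # gated; requires approved access
adapter_id = "mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr2e-6_1ep"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # needs accelerate
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights
```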
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1 | {"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr2e-6_1ep", "results": []}]} | mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr2e-6_1ep | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | 2024-04-24T18:41:41+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us
|
# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr2e-6_1ep
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.4651
- eval_runtime: 2.8947
- eval_samples_per_second: 69.092
- eval_steps_per_second: 8.637
- epoch: 0.9984
- step: 78
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1 | [
"# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr2e-6_1ep\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 3.4651\n- eval_runtime: 2.8947\n- eval_samples_per_second: 69.092\n- eval_steps_per_second: 8.637\n- epoch: 0.9984\n- step: 78",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-06\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us \n",
"# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr2e-6_1ep\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 3.4651\n- eval_runtime: 2.8947\n- eval_samples_per_second: 69.092\n- eval_steps_per_second: 8.637\n- epoch: 0.9984\n- step: 78",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-06\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.19.1"
] |
null | transformers |
# Uploaded model
- **Developed by:** FeinFein
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | FeinFein/llama3_police | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:42:38+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: FeinFein
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: FeinFein\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: FeinFein\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TroyDoesAI/Mermaid-Llama-3-4B-Pruned
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-4B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-4B-Pruned.Q2_K.gguf) | Q2_K | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-4B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-4B-Pruned.IQ3_XS.gguf) | IQ3_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-4B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-4B-Pruned.Q3_K_S.gguf) | Q3_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-4B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-4B-Pruned.IQ3_S.gguf) | IQ3_S | 2.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-4B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-4B-Pruned.IQ3_M.gguf) | IQ3_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-4B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-4B-Pruned.Q3_K_M.gguf) | Q3_K_M | 2.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-4B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-4B-Pruned.Q3_K_L.gguf) | Q3_K_L | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-4B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-4B-Pruned.IQ4_XS.gguf) | IQ4_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-4B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-4B-Pruned.Q4_K_S.gguf) | Q4_K_S | 2.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-4B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-4B-Pruned.Q4_K_M.gguf) | Q4_K_M | 2.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-4B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-4B-Pruned.Q5_K_S.gguf) | Q5_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-4B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-4B-Pruned.Q5_K_M.gguf) | Q5_K_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-4B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-4B-Pruned.Q6_K.gguf) | Q6_K | 3.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-4B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-4B-Pruned.Q8_0.gguf) | Q8_0 | 4.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-4B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-4B-Pruned.f16.gguf) | f16 | 9.2 | 16 bpw, overkill |
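A single quant from the table above can also be fetched programmatically with `huggingface_hub` (assuming it is installed); the filename below is the recommended Q4_K_M file listed above.

```python
from huggingface_hub import hf_hub_download

# downloads one GGUF file from this repo into the local HF cache
path = hf_hub_download(
    repo_id="mradermacher/Mermaid-Llama-3-4B-Pruned-GGUF",
    filename="Mermaid-Llama-3-4B-Pruned.Q4_K_M.gguf",
)
print(path)
```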
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "cc-by-4.0", "library_name": "transformers", "base_model": "TroyDoesAI/Mermaid-Llama-3-4B-Pruned", "quantized_by": "mradermacher"} | mradermacher/Mermaid-Llama-3-4B-Pruned-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:TroyDoesAI/Mermaid-Llama-3-4B-Pruned",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:43:26+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-TroyDoesAI/Mermaid-Llama-3-4B-Pruned #license-cc-by-4.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-TroyDoesAI/Mermaid-Llama-3-4B-Pruned #license-cc-by-4.0 #endpoints_compatible #region-us \n"
] |
graph-ml | null | # PreMode
This is the repository for our manuscript "PreMode predicts mode-of-action of missense variants by deep graph representation learning of protein sequence and structural context" posted on bioRxiv: https://www.biorxiv.org/content/10.1101/2024.02.20.581321v3
# Data
Unzip the files with this script:
```
bash unzip.files.sh
```
Unfortunately, we are not allowed to share the HGMD data, so in the `data.files/pretrain/training.*` files we removed all the pathogenic variants from HGMD (49,218 in total). This might affect the plots in `analysis/figs/fig.sup.12.pdf` and `analysis/figs/fig.sup.13.pdf` if you re-run the R scripts in the `analysis/` folder.
Instead, we share the trained weights of our models that were trained with the HGMD data included.
# Install Packages
Please install the necessary packages using
```
mamba env create -f PreMode.yaml
mamba env create -f r4-base.yaml
```
You can check the installation by running
```
conda activate PreMode
python train.py --conf scripts/TEST.yaml --mode train
```
If no error occurs, the installation was successful.
# New Experiment
## Start from scratch and use our G/LoF datasets
1. Please prepare a folder under `scripts/` and create a file named `pretrain.seed.0.yaml` inside it; see `scripts/PreMode/pretrain.seed.0.yaml` for an example.
2. Run pretrain in pathogenicity task:
```
python train.py --conf scripts/NEW_FOLDER/pretrain.seed.0.yaml
```
3. Prepare transfer learning config files:
```
bash scripts/DMS.prepare.yaml.sh scripts/NEW_FOLDER/
```
4. Run transfer learning:
```
bash scripts/DMS.5fold.run.sh scripts/NEW_FOLDER TASK_NAME GPU_ID
```
If you have multiple tasks, separate them with commas in TASK_NAME, e.g. "task_1,task_2,task_3".
5. (Optional) To reuse the transfer learning tasks in our paper using 8 GPU cards, just do
```
bash transfer.all.sh scripts/NEW_FOLDER
```
If you only have one GPU card, then do
```
bash transfer.all.in.one.card.sh scripts/NEW_FOLDER
```
6. Save inference results:
```
bash scripts/DMS.5fold.inference.sh scripts/NEW_FOLDER analysis/NEW_FOLDER TASK_NAME GPU_ID
```
7. You'll get a folder `analysis/NEW_FOLDER/TASK_NAME` with 5 `.csv` files; each file has 4 columns, `logits.FOLD.[0-3]`. Each column represents the G/LoF prediction from one cross-validation fold (closer to 0 means more likely GoF, closer to 1 means more likely LoF). We suggest averaging the predictions across the 4 columns, as in the sketch below.
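A minimal averaging sketch with pandas (the file path below is a placeholder for one of the five `.csv` files):

```python
import pandas as pd

df = pd.read_csv("analysis/NEW_FOLDER/TASK_NAME/predictions.csv")  # placeholder path
fold_cols = [f"logits.FOLD.{i}" for i in range(4)]  # logits.FOLD.0 ... logits.FOLD.3
df["glof_mean"] = df[fold_cols].mean(axis=1)  # closer to 0 -> GoF, closer to 1 -> LoF
```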
## Only transfer learning, user defined mode-of-action datasets
1. Prepare a `.csv` file for training and inference; there are two accepted formats:
+ Format 1 (only for missense variants):
| uniprotID | aaChg | score | ENST |
| :-: | :-: | :-: | :-: |
| P15056 | p.V600E | 1 | ENST00000646891 |
| P15056 | p.G446V | -1 | ENST00000646891 |
+ `uniprotID`: the uniprot ID of the protein.
+ `aaChg`: the amino acid change induced by missense variant.
+ `score`: 1 for GoF, -1 for LoF. For inference it is not required. For DMS, this could be experimental readouts. If you have multiplexed assays, you can change it to `score.1, score.2, score.3, ..., score.N`.
+ `ENST` (optional): the ensemble transcript ID that matched the uniprotID.
+ Format 2 (can be missense variant or multiple variants):
| uniprotID | ref | alt | pos.orig | score | ENST | wt.orig | sequence.len.orig |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| P15056 | V | E | 600 | 1 | ENST00000646891 | ... | 766 |
| P15056 | G | V | 446 | -1 | ENST00000646891 | ... | 766 |
| P15056 | G;V | V;F | 446;471 | -1 | ENST00000646891 | ... | 766 |
+ `uniprotID`: the uniprot ID of the protein.
+ `ref`: the reference amino acid, if multiple variants, separated by ";".
+ `alt`: the alternative, if multiple variants, separated by ";" in the same order of "ref".
+ `pos.orig`: the amino acid change position, if multiple variants, separated by ";" in the same order of "ref".
+ `score`: same as above.
+ `ENST` (optional): same as above.
+ `wt.orig`: the wild type protein sequence, in the uniprot format.
+ `sequence.len.orig`: the wild type protein sequence length.
+ If you prepared your input in Format 1, please run
```
bash parse.input.table/parse.input.table.sh YOUR_FILE TRANSFORMED_FILE
```
to transform it into Format 2. Note that it will drop rows whose aaChg does not match the corresponding AlphaFold sequence (a sketch of the parsing step follows below).
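The core of that transformation is splitting each `aaChg` such as `p.V600E` into `ref`, `pos.orig`, and `alt`. A minimal sketch of the idea, covering only single-letter missense changes and none of the AlphaFold sequence checks done by the actual script:

```python
import re

def parse_aa_chg(aa_chg: str):
    # "p.V600E" -> ("V", 600, "E")
    m = re.fullmatch(r"p\.([A-Z])(\d+)([A-Z])", aa_chg)
    if m is None:
        raise ValueError(f"unrecognized aaChg: {aa_chg}")
    return m.group(1), int(m.group(2)), m.group(3)

print(parse_aa_chg("p.V600E"))  # ('V', 600, 'E')
```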
2. Prepare a config file for training the model and inference.
```
bash scripts/prepare.new.task.yaml.sh PRETRAIN_MODEL_NAME YOUR_TASK_NAME YOUR_TRAINING_FILE YOUR_INFERENCE_FILE TASK_TYPE MODE_OF_ACTION_N
```
+ `PRETRAIN_MODEL_NAME` could be one of the following:
+ `scripts/PreMode`: Default PreMode
+ `scripts/PreMode.ptm`: PreMode + ptm as input
+ `scripts/PreMode.noStructure`: PreMode without structure input
+ `scripts/PreMode.noESM`: PreMode, replaced ESM2 input with one-hot encodings of 20 AAs.
+ `scripts/PreMode.noMSA`: PreMode without MSA input
+ `scripts/ESM.SLP`: ESM embedding + Single Layer Perceptron
+ `YOUR_TASK_NAME` can be any name you prefer
+ `YOUR_TRAINING_FILE` is the training file you prepared in step 1.
+ `YOUR_INFERENCE_FILE` is the inference file you prepared in step 1.
+ `TASK_TYPE` could be `DMS` or `GLOF`.
+ `MODE_OF_ACTION_N` The number of dimensions of mode-of-action. For `GLOF` this is usually 1. For a multiplexed `DMS` dataset, this could be the number of biochemical properties measured. Note that if it is larger than 1, you have to make sure the `score` column from step 1 is replaced by `score.1, score.2, ..., score.N` correspondingly.
3. Run your config file
```
conda activate PreMode
bash scripts/run.new.task.sh PRETRAIN_MODEL_NAME YOUR_TASK_NAME OUTPUT_FOLDER GPU_ID
```
This should take ~30 min on an NVIDIA A40 GPU, depending on your dataset size.
4. You'll get a file in the `OUTPUT_FOLDER` named as `YOUR_TASK_NAME.inference.result.csv`.
+ If your `TASK_TYPE` is `GLOF`, the column `logits` holds the inference results: closer to 0 means GoF, closer to 1 means LoF.
+ If your `TASK_TYPE` is `DMS` and `MODE_OF_ACTION_N` is 1, the column `logits` holds the inference results. If your `MODE_OF_ACTION_N` is larger than 1, you will get multiple `logits.*` columns, each representing a predicted DMS measurement. A post-processing sketch follows below.
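For example, a `GLOF` result file could be post-processed as follows; the 0.5 threshold is an assumption, not a cutoff specified by the authors:

```python
import pandas as pd

res = pd.read_csv("OUTPUT_FOLDER/YOUR_TASK_NAME.inference.result.csv")
# GLOF task: logits closer to 0 -> GoF, closer to 1 -> LoF
res["call"] = res["logits"].apply(lambda p: "LoF" if p >= 0.5 else "GoF")
```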
# Models & Figures in our manuscript
## Pretrained Models
Here is the list of models in our manuscript:
`scripts/PreMode/` PreMode; it takes 250 GB of RAM and 4 NVIDIA A40 GPUs to run and finishes in ~50 h.
`scripts/ESM.SLP/` Baseline model: ESM2 (650M) + Single Layer Perceptron.
`scripts/PreMode.large.window/` PreMode with the window size set to 1251 AA.
`scripts/PreMode.noESM/` PreMode with the ESM2 embeddings replaced by one-hot encodings of the 20 AAs.
`scripts/PreMode.noMSA/` PreMode without the MSA input.
`scripts/PreMode.noPretrain/` PreMode, but not pretrained on ClinVar/HGMD.
`scripts/PreMode.noStructure/` PreMode without the AF2 predicted structure input.
`scripts/PreMode.ptm/` PreMode with one-hot encodings of post-translational modification sites added as input.
`scripts/PreMode.mean.var/` PreMode that outputs both a predicted value (mean) and a confidence (variance); used in the adaptive learning tasks.
## Predicted mode-of-action
| gene | file |
| :-: | :-: |
| BRAF | `analysis/5genes.all.mut/PreMode/P15056.logits.csv` |
| RET | `analysis/5genes.all.mut/PreMode/P07949.logits.csv` |
| TP53 | `analysis/5genes.all.mut/PreMode/P04637.logits.csv` |
| KCNJ11 | `analysis/5genes.all.mut/PreMode/Q14654.logits.csv` |
| CACNA1A | `analysis/5genes.all.mut/PreMode/O00555.logits.csv` |
| SCN5A | `analysis/5genes.all.mut/PreMode/Q14524.logits.csv` |
| SCN2A | `analysis/5genes.all.mut/PreMode/Q99250.logits.csv` |
| ABCC8 | `analysis/5genes.all.mut/PreMode/Q09428.logits.csv` |
| PTEN | `analysis/5genes.all.mut/PreMode/P60484.logits.csv` |
For each file, column `logits.0` is the predicted pathogenicity, `logits.1` is the predicted LoF probability, and `logits.2` is the predicted GoF probability.
For PTEN, `logits.1` is the predicted stability (0 = loss, 1 = neutral) and `logits.2` is the predicted enzyme activity (0 = loss, 1 = neutral).
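For example, the BRAF predictions can be loaded and summarized with pandas, using the column meanings described above:

```python
import pandas as pd

braf = pd.read_csv("analysis/5genes.all.mut/PreMode/P15056.logits.csv")
# logits.0: pathogenicity; logits.1: LoF probability; logits.2: GoF probability
print(braf[["logits.0", "logits.1", "logits.2"]].describe())
```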
## Figures
Please go to `analysis/` folder and run the corresponding R scripts. | {"language": ["en"], "tags": ["biology"], "pipeline_tag": "graph-ml"} | gzhong/PreMode | null | [
"biology",
"graph-ml",
"en",
"region:us"
] | null | 2024-04-24T18:43:56+00:00 | [] | [
"en"
] | TAGS
#biology #graph-ml #en #region-us
| PreMode
=======
This is the repository for our manuscript "PreMode predicts mode-of-action of missense variants by deep graph representation learning of protein sequence and structural context" posted on bioRxiv: URL
Data
====
Unzip the files with this script:
Unfortunately we are not allowed to share the HGMD data, so in the 'URL files we removed all the pathogenic variants from HGMD (49218 in total). This might affect the plots of 'analysis/figs/URL' and 'analysis/figs/URL' if you re-run the R codes in 'analysis/' folder.
We shared the trained weights of our models trained using HGMD instead.
Install Packages
================
Please install the necessary packages using
You can check the installation by running
If no error occurs, it means successful installation.
New Experiment
==============
Start from scratch and use our G/LoF datasets
---------------------------------------------
1. Please prepare a folder under 'scripts/' and create a file named 'URL.0.yaml' inside the folder, check 'scripts/PreMode/URL.0.yaml' for example.
2. Run pretrain in pathogenicity task:
3. Prepare transfer learning config files:
4. Run transfer learning:
If you have multiple tasks, just separate each task by comma in the TASK\_NAME, like "task\_1,task\_2,task\_3".
5. (Optional) To reuse the transfer learning tasks in our paper using 8 GPU cards, just do
If you only have one GPU card, then do
6. Save inference results:
7. You'll get a folder 'analysis/NEW\_FOLDER/TASK\_NAME' with 5 '.csv' files; each file has 4 columns, 'logits.FOLD.[0-3]'. Each column represents the G/LoF prediction from one cross-validation fold (closer to 0 means more likely GoF, closer to 1 means more likely LoF). We suggest averaging the predictions across the 4 columns.
Only transfer learning, user defined mode-of-action datasets
------------------------------------------------------------
1. Prepare a '.csv' file for training and inference, there are two accepted formats:
* Format 1 (only for missense variants):
+ 'uniprotID': the uniprot ID of the protein.
+ 'aaChg': the amino acid change induced by missense variant.
+ 'score': 1 for GoF, -1 for LoF. For inference it is not required. For DMS, this could be experimental readouts. If you have multiplexed assays, you can change it to 'score.1, score.2, score.3, ..., score.N'.
+ 'ENST' (optional): the ensemble transcript ID that matched the uniprotID.
* Format 2 (can be missense variant or multiple variants):
+ 'uniprotID': the uniprot ID of the protein.
+ 'ref': the reference amino acid, if multiple variants, separated by ";".
+ 'alt': the alternative, if multiple variants, separated by ";" in the same order of "ref".
+ 'URL': the amino acid change position, if multiple variants, separated by ";" in the same order of "ref".
+ 'score': same as above.
+ 'ENST' (optional): same as above.
+ 'URL': the wild type protein sequence, in the uniprot format.
+ 'URL': the wild type protein sequence length.
* If you prepared your input in Format 1, please run
to transform it to Format 2, note it will drop some lines if your aaChg doesn't match the corresponding alphafold sequence.
2. Prepare a config file for training the model and inference.
* 'PRETRAIN\_MODEL\_NAME' could be one of the following:
+ 'scripts/PreMode': Default PreMode
+ 'scripts/URL': PreMode + ptm as input
+ 'scripts/PreMode.noStructure': PreMode without structure input
+ 'scripts/PreMode.noESM': PreMode, replaced ESM2 input with one-hot encodings of 20 AAs.
+ 'scripts/PreMode.noMSA': PreMode without MSA input
+ 'scripts/ESM.SLP': ESM embedding + Single Layer Perceptron
* 'YOUR\_TASK\_NAME' can be anything on your preference
* 'YOUR\_TRAINING\_FILE' is the training file you prepared in step 1.
* 'YOUR\_INFERENCE\_FILE' is the inference file you prepared in step 1.
* 'TASK\_TYPE' could be 'DMS' or 'GLOF'.
* 'MODE\_OF\_ACTION\_N' The number of dimensions of mode-of-action. For 'GLOF' this is usually 1. For multiplexed 'DMS' dataset, this could be the number of biochemical properties measured. Note that if it is larger than 1, then you have to make sure the 'score' column in step 1 is replaced to 'score.1, score.2, ..., score.N' correspondingly.
3. Run your config file
This should take ~30min on a NVIDIA A40 GPU depending on your data set size.
4. You'll get a file in the 'OUTPUT\_FOLDER' named as 'YOUR\_TASK\_NAME.URL'.
* If your 'TASK\_TYPE' is 'GLOF', then the column 'logits' will be the inference results. Closer to 0 means GoF, closer to 1 means LoF.
* If your 'TASK\_TYPE' is 'DMS' and 'MODE\_OF\_ACTION\_N' is 1, then the column 'logits' will be the inference results. If your 'MODE\_OF\_ACTION\_N' is larger than 1, then you will get multiple columns of 'logits.\*', each represent a predicted DMS measurement.
Models & Figures in our manuscript
==================================
Pretrained Models
-----------------
Here is the list of models in our manuscript:
'scripts/PreMode/' PreMode; it takes 250 GB of RAM and 4 NVIDIA A40 GPUs to run and finishes in ~50 h.
'scripts/ESM.SLP/' Baseline model: ESM2 (650M) + Single Layer Perceptron.
'scripts/URL PreMode with the window size set to 1251 AA.
'scripts/PreMode.noESM/' PreMode with the ESM2 embeddings replaced by one-hot encodings of the 20 AAs.
'scripts/PreMode.noMSA/' PreMode without the MSA input.
'scripts/PreMode.noPretrain/' PreMode, but not pretrained on ClinVar/HGMD.
'scripts/PreMode.noStructure/' PreMode without the AF2 predicted structure input.
'scripts/URL PreMode with one-hot encodings of post-translational modification sites added as input.
'scripts/URL PreMode that outputs both a predicted value (mean) and a confidence (variance); used in the adaptive learning tasks.
Predicted mode-of-action
------------------------
For each file, column 'logits.0' is the predicted pathogenicity, 'logits.1' is the predicted LoF probability, and 'logits.2' is the predicted GoF probability.
For PTEN, 'logits.1' is the predicted stability (0 = loss, 1 = neutral) and 'logits.2' is the predicted enzyme activity (0 = loss, 1 = neutral)
Figures
-------
Please go to 'analysis/' folder and run the corresponding R scripts.
| [] | [
"TAGS\n#biology #graph-ml #en #region-us \n"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
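The repo tags indicate a T5 text2text checkpoint, so a minimal loading sketch would be the following; the multiple-choice prompt format is a guess based on the repository name, not something documented in this card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "boringblobking/medmcqa1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

prompt = "question: ... options: (a) ... (b) ..."  # hypothetical input format
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```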
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | boringblobking/medmcqa1 | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T18:46:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
GGUF-IQ-Imatrix quants for [jeiku/Average_Normie_l3_v1_8B](https://huggingface.co/jeiku/Average_Normie_l3_v1_8B).
> [!WARNING]
> Compatible SillyTavern presets [here (simple)](https://huggingface.co/Lewdiculous/Model-Requests/tree/main/data/presets/cope-llama-3-0.1) or [here (Virt's)](https://huggingface.co/Virt-io/SillyTavern-Presets). <br>
> Use the latest version of KoboldCpp. **Use the provided presets.** <br>
> This is all still highly experimental. Let the authors know how it performs for you; feedback is more important than ever now.
> [!NOTE]
> For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** quant for up to 12288 context sizes.
**Original model information:**
# Average Normie v1

A model by an average normie for the average normie.
This model is a stock merge of the following models:
https://huggingface.co/cgato/L3-TheSpice-8b-v0.1.3
https://huggingface.co/Sao10K/L3-Solana-8B-v1
https://huggingface.co/ResplendentAI/Kei_Llama3_8B
The final merge then had the following LoRA applied over it:
https://huggingface.co/ResplendentAI/Theory_of_Mind_Llama3
This should be an intelligent and adept roleplaying model. | {"language": ["en"], "tags": ["roleplay", "llama3", "sillytavern"]} | Lewdiculous/Average_Normie_l3_v1_8B-GGUF-IQ-Imatrix | null | [
"gguf",
"roleplay",
"llama3",
"sillytavern",
"en",
"region:us"
] | null | 2024-04-24T18:46:33+00:00 | [] | [
"en"
] | TAGS
#gguf #roleplay #llama3 #sillytavern #en #region-us
|
GGUF-IQ-Imatrix quants for jeiku/Average_Normie_l3_v1_8B.
> [!WARNING]
> Compatible SillyTavern presets here (simple) or here (Virt's). <br>
> Use the latest version of KoboldCpp. Use the provided presets. <br>
> This is all still highly experimental. Let the authors know how it performs for you; feedback is more important than ever now.
> [!NOTE]
> For 8GB VRAM GPUs, I recommend the Q4_K_M-imat quant for up to 12288 context sizes.
Original model information:
# Average Normie v1
!image/png
A model by an average normie for the average normie.
This model is a stock merge of the following models:
URL
URL
URL
The final merge then had the following LoRA applied over it:
URL
This should be an intelligent and adept roleplaying model. | [
"# Average Normie v1\n\n!image/png\n\nA model by an average normie for the average normie. \n\nThis model is a stock merge of the following models:\n\nURL\n\nURL\n\nURL\n\nThe final merge then had the following LoRA applied over it:\n\nURL\n\nThis should be an intelligent and adept roleplaying model."
] | [
"TAGS\n#gguf #roleplay #llama3 #sillytavern #en #region-us \n",
"# Average Normie v1\n\n!image/png\n\nA model by an average normie for the average normie. \n\nThis model is a stock merge of the following models:\n\nURL\n\nURL\n\nURL\n\nThe final merge then had the following LoRA applied over it:\n\nURL\n\nThis should be an intelligent and adept roleplaying model."
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1804
- Bleu: 0.225
- Gen Len: 18.1268
## Model description
More information needed
## Intended uses & limitations
More information needed
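Given the `t5-small` base and the Bleu metric reported above, this is presumably a translation model fine-tuned on opus_books; the language pair is not stated, so the English-to-French prefix below is only the usual tutorial default, not a documented fact.

```python
from transformers import pipeline

translator = pipeline(
    "text2text-generation", model="liamvbetts/my_awesome_opus_books_model"
)
# T5-style models expect a task prefix; "translate English to French:" is assumed here
print(translator("translate English to French: The book is on the table."))
```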
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 3.6472 | 1.0 | 1617 | 3.2646 | 0.1867 | 18.1246 |
| 3.5198 | 2.0 | 3234 | 3.1804 | 0.225 | 18.1268 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "t5-small", "model-index": [{"name": "my_awesome_opus_books_model", "results": []}]} | liamvbetts/my_awesome_opus_books_model | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T18:47:13+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| my\_awesome\_opus\_books\_model
===============================
This model is a fine-tuned version of t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 3.1804
* Bleu: 0.225
* Gen Len: 18.1268
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr2e-6_2ep
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.2777
- eval_runtime: 2.8582
- eval_samples_per_second: 69.975
- eval_steps_per_second: 8.747
- epoch: 1.9968
- step: 156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
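For reference, the LoRA adapter can be attached to the base model with the standard PEFT API. A minimal sketch; the dtype and device settings are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,  # assumption
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
# Attach the fine-tuned LoRA weights to the frozen base model.
model = PeftModel.from_pretrained(base, "mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr2e-6_2ep")
```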
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1 | {"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr2e-6_2ep", "results": []}]} | mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr2e-6_2ep | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | 2024-04-24T18:47:29+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us
|
# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr2e-6_2ep
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.2777
- eval_runtime: 2.8582
- eval_samples_per_second: 69.975
- eval_steps_per_second: 8.747
- epoch: 1.9968
- step: 156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- PEFT 0.9.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1 | [
"# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr2e-6_2ep\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 3.2777\n- eval_runtime: 2.8582\n- eval_samples_per_second: 69.975\n- eval_steps_per_second: 8.747\n- epoch: 1.9968\n- step: 156",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-06\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us \n",
"# mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_Lora_lr2e-6_2ep\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 3.2777\n- eval_runtime: 2.8582\n- eval_samples_per_second: 69.975\n- eval_steps_per_second: 8.747\n- epoch: 1.9968\n- step: 156",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-06\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.19.1"
] |
image-to-text | transformers | [Evaluation on chexpert-plus](https://github.com/Stanford-AIMI/chexpert-plus)
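A minimal inference sketch, assuming the checkpoint works with the standard transformers image-to-text pipeline (as the vision-encoder-decoder architecture suggests); the sample image is one of the widget examples:

```python
from transformers import pipeline

# Generates a findings section from a chest X-ray image.
generator = pipeline("image-to-text", model="IAMJB/mimic-cxr-findings-baseline")
url = "https://huggingface.co/IAMJB/interpret-cxr-impression-baseline/resolve/main/effusions-bibasal.jpg"
print(generator(url, max_new_tokens=128)[0]["generated_text"])
```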
| {"language": "en", "license": "mit", "library_name": "transformers", "tags": ["image-to-text"], "widget": [{"src": "https://huggingface.co/IAMJB/interpret-cxr-impression-baseline/resolve/main/effusions-bibasal.jpg"}, {"src": "https://huggingface.co/IAMJB/interpret-cxr-impression-baseline/resolve/main/Chest-X-ray-taken-on-2-nd-day-of-admission-in-the_Q320.jpg"}, {"src": "https://huggingface.co/IAMJB/interpret-cxr-impression-baseline/resolve/main/effusions-bibasal.jpg"}]} | IAMJB/mimic-cxr-findings-baseline | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-to-text",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:47:58+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #vision-encoder-decoder #image-to-text #en #license-mit #endpoints_compatible #region-us
| Evaluation on chexpert-plus
| [] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #image-to-text #en #license-mit #endpoints_compatible #region-us \n"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename is an assumption based on the usual huggingface_sb3 naming convention:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The checkpoint filename is an assumption.
checkpoint = load_from_hub("yosthin06/ppo-LunarLander-v2-yosthin", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
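
To check the reported mean reward, the loaded policy can be evaluated with SB3's `evaluate_policy` helper (assuming a Gymnasium-based SB3 2.x install):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```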
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "246.90 +/- 19.00", "name": "mean_reward", "verified": false}]}]}]} | yosthin06/ppo-LunarLander-v2-yosthin | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-24T18:48:11+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llava_clip_llama3_8b_finetune_8192
This model is a fine-tuned version of [MFuyu/llava_clip_llama3_8b_pretrain_8192](https://huggingface.co/MFuyu/llava_clip_llama3_8b_pretrain_8192) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
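The effective batch size follows directly from the distributed setup; a quick sanity check:

```python
per_device_batch = 1     # train_batch_size
num_devices = 16         # multi-GPU
grad_accum_steps = 8     # gradient_accumulation_steps
assert per_device_batch * num_devices * grad_accum_steps == 128  # total_train_batch_size
```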
### Training results
### Framework versions
- Transformers 4.39.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "MFuyu/llava_clip_llama3_8b_pretrain_8192", "model-index": [{"name": "llava_clip_llama3_8b_finetune_8192", "results": []}]} | MFuyu/llava_clip_llama3_8b_finetune_8192 | null | [
"transformers",
"safetensors",
"llava",
"pretraining",
"generated_from_trainer",
"base_model:MFuyu/llava_clip_llama3_8b_pretrain_8192",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:49:12+00:00 | [] | [] | TAGS
#transformers #safetensors #llava #pretraining #generated_from_trainer #base_model-MFuyu/llava_clip_llama3_8b_pretrain_8192 #endpoints_compatible #region-us
|
# llava_clip_llama3_8b_finetune_8192
This model is a fine-tuned version of MFuyu/llava_clip_llama3_8b_pretrain_8192 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.39.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# llava_clip_llama3_8b_finetune_8192\n\nThis model is a fine-tuned version of MFuyu/llava_clip_llama3_8b_pretrain_8192 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 16\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 128\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.2\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #llava #pretraining #generated_from_trainer #base_model-MFuyu/llava_clip_llama3_8b_pretrain_8192 #endpoints_compatible #region-us \n",
"# llava_clip_llama3_8b_finetune_8192\n\nThis model is a fine-tuned version of MFuyu/llava_clip_llama3_8b_pretrain_8192 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 16\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 128\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.2\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_ablation_4iters_bs256_nodpo_sample2_iter_3
This model is a fine-tuned version of [ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_2](https://huggingface.co/ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
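For context, a DPO run with these settings would look roughly like the following trl sketch. This is a minimal illustration, assuming a trl release contemporary with Transformers 4.36 in which `DPOTrainer` accepts `beta` directly; the beta value and the inline dataset are assumptions, not the card's actual data:

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_2"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Hypothetical stand-in for the updated/original preference data.
preference_dataset = Dataset.from_dict({
    "prompt": ["Say hi."],
    "chosen": ["Hi there!"],
    "rejected": ["Go away."],
})

args = TrainingArguments(
    output_dir="iter_3",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    remove_unused_columns=False,  # keep the raw prompt/chosen/rejected columns for DPOTrainer
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,   # None lets trl clone a frozen reference model internally
    beta=0.1,         # assumption; the card does not state beta
    args=args,
    train_dataset=preference_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```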
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_2", "model-index": [{"name": "0.001_ablation_4iters_bs256_nodpo_sample2_iter_3", "results": []}]} | ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_3 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T18:50:04+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.001_ablation_4iters_bs256_nodpo_sample2_iter_3
This model is a fine-tuned version of ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_2 on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# 0.001_ablation_4iters_bs256_nodpo_sample2_iter_3\n\nThis model is a fine-tuned version of ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_2 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.001_ablation_4iters_bs256_nodpo_sample2_iter_3\n\nThis model is a fine-tuned version of ShenaoZ/0.001_ablation_4iters_bs256_nodpo_sample2_iter_2 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** alberthtan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
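
A minimal loading sketch with Unsloth's API; the sequence length and inference settings are assumptions:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="alberthtan/lora_model_1065_samples",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,    # matches the 4-bit base model
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster generation path
```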
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | alberthtan/lora_model_1065_samples | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T18:52:48+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: alberthtan
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: alberthtan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: alberthtan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |