| modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-26 12:28:17) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (533 classes) | tags (list, length 1–4.05k) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-26 12:22:02) | card (string, length 11–1.01M) |
|---|---|---|---|---|---|---|---|---|---|
| mucai/vip-llava-7b | mucai | 2023-12-17T23:42:47Z | 3,375 | 7 | transformers | ["transformers", "pytorch", "llava", "text-generation", "arxiv:2312.00784", "autotrain_compatible", "region:us"] | text-generation | 2023-12-03T18:19:47Z |
---
inference: false
---
<br>
<br>
# ViP-LLaVA Model Card
## Model details
**Model type:**
ViP-LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on both image-level instruction data and region-level instruction data annotated with visual prompts.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
ViP-LLaVA-7B was trained in November 2023. [Paper](https://arxiv.org/abs/2312.00784)
**Paper or resources for more information:**
https://vip-llava.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/mu-cai/ViP-LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of ViP-LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
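## How to use (sketch)
For quick experimentation, a minimal inference sketch is shown below. It rests on assumptions: this repository targets the original ViP-LLaVA codebase (the card sets `inference: false`), so the snippet instead loads the Hugging Face-format conversion `llava-hf/vip-llava-7b-hf` with a recent `transformers` release that provides `VipLlavaForConditionalGeneration`, and the prompt template follows the one documented for that conversion.
```python
import torch
from PIL import Image
from transformers import AutoProcessor, VipLlavaForConditionalGeneration

# Assumption: load the HF-format conversion rather than this original-codebase checkpoint.
model_id = "llava-hf/vip-llava-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = VipLlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("example.jpg")  # any local image
prompt = (
    "A chat between a curious human and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the human's questions."
    "###Human: <image>\nWhat is shown in this image?###Assistant:"
)
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```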
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 665K image level instruction data from LLaVA-1.5.
- 520K image-text pairs marked with visual prompts.
- 13K region-level instruction data generated from GPT-4V.
## Evaluation dataset
ViP-LLaVA achieves state-of-the-art performance on 4 academic region-level benchmarks as well as on our newly proposed RegionBench.
| shirsh10mall/First_LLM_Project | shirsh10mall | 2023-12-17T23:40:28Z | 18 | 0 | peft | ["peft", "pytorch", "t5", "arxiv:1910.09700", "base_model:google/flan-t5-large", "base_model:adapter:google/flan-t5-large", "4-bit", "region:us"] | null | 2023-07-17T12:30:15Z |
---
library_name: peft
base_model: google/flan-t5-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
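In lieu of the missing snippet, a minimal loading sketch is given below. It assumes this repository hosts a PEFT adapter for `google/flan-t5-large` (per the repository metadata) and reuses the 4-bit `bitsandbytes` settings reported under Training procedure further down; the prompt is only a placeholder.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig

base_id = "google/flan-t5-large"
adapter_id = "shirsh10mall/First_LLM_Project"  # this repository

# 4-bit quantization mirroring the values reported in the Training procedure section.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Summarize: PEFT adapters keep fine-tuning cheap.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```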
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0
| Prezily/bert-yelp | Prezily | 2023-12-17T23:31:55Z | 1 | 0 | transformers | ["transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-12-17T23:31:16Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: bert-yelp
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-yelp
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5026
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
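A minimal inference sketch for this checkpoint (TensorFlow weights, per the repository tags). The example review and the interpretation of the predicted class index are assumptions, since the dataset and label mapping are not documented here.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFBertForSequenceClassification

model_id = "Prezily/bert-yelp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFBertForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The food was great and the service was fast!", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # predicted class index (label names not documented)
```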
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.5026 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
| hkivancoral/smids_5x_deit_base_sgd_00001_fold2 | hkivancoral | 2023-12-17T23:30:35Z | 5 | 0 | transformers | ["transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-base-patch16-224", "base_model:finetune:facebook/deit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-12-17T10:35:23Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_5x_deit_base_sgd_00001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.4459234608985025
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_sgd_00001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0641
- Accuracy: 0.4459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
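For reference, the hyperparameter list above maps onto a `transformers` `TrainingArguments` roughly as sketched below (standard Trainer argument names; the output directory is a placeholder and unlisted arguments stay at their defaults):
```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; not the author's original training script.
training_args = TrainingArguments(
    output_dir="smids_5x_deit_base_sgd_00001_fold2",
    learning_rate=1e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```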
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1035 | 1.0 | 375 | 1.1062 | 0.3344 |
| 1.1126 | 2.0 | 750 | 1.1043 | 0.3344 |
| 1.104 | 3.0 | 1125 | 1.1024 | 0.3344 |
| 1.1172 | 4.0 | 1500 | 1.1007 | 0.3428 |
| 1.1218 | 5.0 | 1875 | 1.0990 | 0.3494 |
| 1.103 | 6.0 | 2250 | 1.0973 | 0.3544 |
| 1.0899 | 7.0 | 2625 | 1.0957 | 0.3594 |
| 1.1072 | 8.0 | 3000 | 1.0942 | 0.3661 |
| 1.0922 | 9.0 | 3375 | 1.0926 | 0.3744 |
| 1.0843 | 10.0 | 3750 | 1.0912 | 0.3727 |
| 1.081 | 11.0 | 4125 | 1.0898 | 0.3710 |
| 1.0891 | 12.0 | 4500 | 1.0884 | 0.3760 |
| 1.0709 | 13.0 | 4875 | 1.0871 | 0.3777 |
| 1.0708 | 14.0 | 5250 | 1.0858 | 0.3827 |
| 1.0647 | 15.0 | 5625 | 1.0846 | 0.3827 |
| 1.0675 | 16.0 | 6000 | 1.0834 | 0.3877 |
| 1.0777 | 17.0 | 6375 | 1.0822 | 0.3927 |
| 1.1021 | 18.0 | 6750 | 1.0811 | 0.3943 |
| 1.075 | 19.0 | 7125 | 1.0800 | 0.3993 |
| 1.08 | 20.0 | 7500 | 1.0789 | 0.3977 |
| 1.0665 | 21.0 | 7875 | 1.0779 | 0.4010 |
| 1.0636 | 22.0 | 8250 | 1.0769 | 0.4010 |
| 1.0724 | 23.0 | 8625 | 1.0760 | 0.4043 |
| 1.075 | 24.0 | 9000 | 1.0751 | 0.4093 |
| 1.0668 | 25.0 | 9375 | 1.0742 | 0.4077 |
| 1.0648 | 26.0 | 9750 | 1.0734 | 0.4160 |
| 1.0792 | 27.0 | 10125 | 1.0726 | 0.4176 |
| 1.068 | 28.0 | 10500 | 1.0718 | 0.4160 |
| 1.0536 | 29.0 | 10875 | 1.0711 | 0.4160 |
| 1.0571 | 30.0 | 11250 | 1.0704 | 0.4193 |
| 1.055 | 31.0 | 11625 | 1.0698 | 0.4226 |
| 1.0604 | 32.0 | 12000 | 1.0691 | 0.4226 |
| 1.0502 | 33.0 | 12375 | 1.0686 | 0.4260 |
| 1.0518 | 34.0 | 12750 | 1.0680 | 0.4243 |
| 1.0472 | 35.0 | 13125 | 1.0675 | 0.4276 |
| 1.0642 | 36.0 | 13500 | 1.0670 | 0.4309 |
| 1.052 | 37.0 | 13875 | 1.0666 | 0.4309 |
| 1.0617 | 38.0 | 14250 | 1.0662 | 0.4309 |
| 1.0473 | 39.0 | 14625 | 1.0658 | 0.4359 |
| 1.0678 | 40.0 | 15000 | 1.0655 | 0.4393 |
| 1.0397 | 41.0 | 15375 | 1.0652 | 0.4393 |
| 1.0482 | 42.0 | 15750 | 1.0650 | 0.4393 |
| 1.0333 | 43.0 | 16125 | 1.0647 | 0.4393 |
| 1.0512 | 44.0 | 16500 | 1.0645 | 0.4409 |
| 1.0593 | 45.0 | 16875 | 1.0644 | 0.4409 |
| 1.0581 | 46.0 | 17250 | 1.0643 | 0.4409 |
| 1.043 | 47.0 | 17625 | 1.0642 | 0.4426 |
| 1.0454 | 48.0 | 18000 | 1.0641 | 0.4443 |
| 1.0474 | 49.0 | 18375 | 1.0641 | 0.4459 |
| 1.0427 | 50.0 | 18750 | 1.0641 | 0.4459 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
| maxkretchmer/gc-mixtral | maxkretchmer | 2023-12-17T23:25:46Z | 2 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mixtral-8x7B-v0.1", "base_model:adapter:mistralai/Mixtral-8x7B-v0.1", "region:us"] | null | 2023-12-17T23:24:22Z |
---
library_name: peft
base_model: mistralai/Mixtral-8x7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
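Since the snippet is missing, a minimal adapter-loading sketch follows. It assumes this repository hosts a PEFT adapter for `mistralai/Mixtral-8x7B-v0.1` (per the repository metadata); note that loading the full base model requires substantial GPU memory.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mixtral-8x7B-v0.1"
adapter_id = "maxkretchmer/gc-mixtral"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```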
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
| ndarocha/swin-tiny-patch4-window7-224-breastdensity | ndarocha | 2023-12-17T23:20:52Z | 8 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-12-17T13:18:45Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-breastdensity
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5236051502145923
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-breastdensity
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0571
- Accuracy: 0.5236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1872 | 0.99 | 49 | 1.2194 | 0.4320 |
| 1.0998 | 1.99 | 98 | 1.0917 | 0.4807 |
| 1.0623 | 2.98 | 147 | 1.0571 | 0.5236 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| mike-krk/ppo-SnowballTarget | mike-krk | 2023-12-17T23:11:59Z | 6 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us"] | reinforcement-learning | 2023-12-17T23:02:56Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mike-krk/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
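Outside the browser viewer, the exported policy can also be fetched programmatically; a small sketch with `huggingface_hub` is shown below (the `.onnx` filename is an assumption, so check the repository's file list):
```python
from huggingface_hub import hf_hub_download

# Filename assumed: ML-Agents pushes typically include <BehaviorName>.onnx.
onnx_path = hf_hub_download(repo_id="mike-krk/ppo-SnowballTarget", filename="SnowballTarget.onnx")
print(onnx_path)
```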
| nogamiNeuro/lab4 | nogamiNeuro | 2023-12-17T23:02:03Z | 1 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-12-17T23:01:43Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 217.92 +/- 77.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed; SB3 pushes typically store the model as <algo>-<env>.zip.
checkpoint = load_from_hub(repo_id="nogamiNeuro/lab4", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| owanr/Sentiment-roberta-base-inter-frequency-model_annots_alpha0.0_whole_1e-05 | owanr | 2023-12-17T22:50:23Z | 0 | 0 | null | ["pytorch", "safetensors", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "region:us"] | null | 2023-12-17T22:50:05Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: Sentiment-roberta-base-inter-frequency-model_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment-roberta-base-inter-frequency-model_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2770
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.488 | 1.0 | 5628 | 3.2770 |
| 3.675 | 2.0 | 11256 | 3.2770 |
| 3.479 | 3.0 | 16884 | 3.2770 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| Ashwin-s-n/q-FrozenLake-v1-4x4-noSlippery | Ashwin-s-n | 2023-12-17T22:17:39Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-12-17T22:17:35Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym` on older setups

model = load_from_hub(repo_id="Ashwin-s-n/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
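`load_from_hub` above is not a library function; in the Deep RL course notebooks it is a small helper along the lines of the sketch below (assuming the Q-table is stored as a pickle file):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download a pickled object (here, a dict holding the Q-table and env settings) from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```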
| hkivancoral/smids_5x_deit_base_rms_001_fold1 | hkivancoral | 2023-12-17T22:16:09Z | 5 | 0 | transformers | ["transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-base-patch16-224", "base_model:finetune:facebook/deit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-12-17T21:00:17Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_5x_deit_base_rms_001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7863105175292153
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_rms_001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6839
- Accuracy: 0.7863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1051 | 1.0 | 376 | 1.0840 | 0.3356 |
| 0.8654 | 2.0 | 752 | 0.8754 | 0.4841 |
| 0.7982 | 3.0 | 1128 | 0.7992 | 0.5843 |
| 0.8215 | 4.0 | 1504 | 0.8640 | 0.5509 |
| 0.8937 | 5.0 | 1880 | 0.7446 | 0.6678 |
| 0.7292 | 6.0 | 2256 | 0.7760 | 0.6361 |
| 0.6914 | 7.0 | 2632 | 0.7052 | 0.6694 |
| 0.6499 | 8.0 | 3008 | 0.7542 | 0.6511 |
| 0.6981 | 9.0 | 3384 | 0.6919 | 0.6912 |
| 0.6852 | 10.0 | 3760 | 0.6488 | 0.6995 |
| 0.5929 | 11.0 | 4136 | 0.6360 | 0.7162 |
| 0.6018 | 12.0 | 4512 | 0.6410 | 0.7212 |
| 0.578 | 13.0 | 4888 | 0.6824 | 0.7078 |
| 0.5646 | 14.0 | 5264 | 0.6123 | 0.7546 |
| 0.5813 | 15.0 | 5640 | 0.6611 | 0.7479 |
| 0.5334 | 16.0 | 6016 | 0.6911 | 0.7012 |
| 0.4401 | 17.0 | 6392 | 0.6234 | 0.7362 |
| 0.5629 | 18.0 | 6768 | 0.5782 | 0.7412 |
| 0.5062 | 19.0 | 7144 | 0.6504 | 0.7329 |
| 0.444 | 20.0 | 7520 | 0.5828 | 0.7696 |
| 0.4995 | 21.0 | 7896 | 0.5919 | 0.7446 |
| 0.4251 | 22.0 | 8272 | 0.6276 | 0.7629 |
| 0.4812 | 23.0 | 8648 | 0.6155 | 0.7462 |
| 0.4775 | 24.0 | 9024 | 0.6984 | 0.7179 |
| 0.4597 | 25.0 | 9400 | 0.6577 | 0.7295 |
| 0.4394 | 26.0 | 9776 | 0.5934 | 0.7429 |
| 0.4129 | 27.0 | 10152 | 0.6066 | 0.7563 |
| 0.4098 | 28.0 | 10528 | 0.5792 | 0.7579 |
| 0.4483 | 29.0 | 10904 | 0.5708 | 0.7613 |
| 0.3862 | 30.0 | 11280 | 0.5970 | 0.7679 |
| 0.4253 | 31.0 | 11656 | 0.6053 | 0.7546 |
| 0.4815 | 32.0 | 12032 | 0.5808 | 0.7479 |
| 0.3892 | 33.0 | 12408 | 0.5698 | 0.7613 |
| 0.35 | 34.0 | 12784 | 0.5670 | 0.7563 |
| 0.3952 | 35.0 | 13160 | 0.5921 | 0.7696 |
| 0.4191 | 36.0 | 13536 | 0.5999 | 0.7863 |
| 0.3174 | 37.0 | 13912 | 0.5845 | 0.7679 |
| 0.3864 | 38.0 | 14288 | 0.6529 | 0.7496 |
| 0.4036 | 39.0 | 14664 | 0.6327 | 0.7679 |
| 0.4274 | 40.0 | 15040 | 0.5923 | 0.7646 |
| 0.357 | 41.0 | 15416 | 0.6017 | 0.7863 |
| 0.348 | 42.0 | 15792 | 0.6309 | 0.7763 |
| 0.2967 | 43.0 | 16168 | 0.6418 | 0.7679 |
| 0.3292 | 44.0 | 16544 | 0.6405 | 0.7780 |
| 0.3428 | 45.0 | 16920 | 0.6600 | 0.7813 |
| 0.3127 | 46.0 | 17296 | 0.6429 | 0.7780 |
| 0.2979 | 47.0 | 17672 | 0.6618 | 0.7813 |
| 0.3209 | 48.0 | 18048 | 0.6803 | 0.7796 |
| 0.2866 | 49.0 | 18424 | 0.6856 | 0.7880 |
| 0.2611 | 50.0 | 18800 | 0.6839 | 0.7863 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
| hkivancoral/smids_5x_deit_base_sgd_00001_fold1 | hkivancoral | 2023-12-17T22:14:38Z | 8 | 0 | transformers | ["transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-base-patch16-224", "base_model:finetune:facebook/deit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-12-17T09:20:26Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_5x_deit_base_sgd_00001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5008347245409015
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_sgd_00001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0498
- Accuracy: 0.5008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1042 | 1.0 | 376 | 1.0929 | 0.3856 |
| 1.1057 | 2.0 | 752 | 1.0909 | 0.3923 |
| 1.1149 | 3.0 | 1128 | 1.0890 | 0.3940 |
| 1.1189 | 4.0 | 1504 | 1.0872 | 0.3907 |
| 1.1034 | 5.0 | 1880 | 1.0854 | 0.3973 |
| 1.0984 | 6.0 | 2256 | 1.0837 | 0.4023 |
| 1.1017 | 7.0 | 2632 | 1.0821 | 0.4073 |
| 1.0896 | 8.0 | 3008 | 1.0805 | 0.4157 |
| 1.0923 | 9.0 | 3384 | 1.0789 | 0.4240 |
| 1.0904 | 10.0 | 3760 | 1.0774 | 0.4257 |
| 1.0756 | 11.0 | 4136 | 1.0759 | 0.4324 |
| 1.0821 | 12.0 | 4512 | 1.0745 | 0.4357 |
| 1.0908 | 13.0 | 4888 | 1.0731 | 0.4424 |
| 1.0966 | 14.0 | 5264 | 1.0718 | 0.4441 |
| 1.0817 | 15.0 | 5640 | 1.0706 | 0.4441 |
| 1.0679 | 16.0 | 6016 | 1.0693 | 0.4457 |
| 1.0876 | 17.0 | 6392 | 1.0681 | 0.4457 |
| 1.064 | 18.0 | 6768 | 1.0670 | 0.4474 |
| 1.072 | 19.0 | 7144 | 1.0658 | 0.4474 |
| 1.09 | 20.0 | 7520 | 1.0648 | 0.4474 |
| 1.081 | 21.0 | 7896 | 1.0637 | 0.4508 |
| 1.0655 | 22.0 | 8272 | 1.0627 | 0.4558 |
| 1.0774 | 23.0 | 8648 | 1.0618 | 0.4574 |
| 1.0736 | 24.0 | 9024 | 1.0609 | 0.4608 |
| 1.0774 | 25.0 | 9400 | 1.0600 | 0.4691 |
| 1.055 | 26.0 | 9776 | 1.0591 | 0.4691 |
| 1.0689 | 27.0 | 10152 | 1.0583 | 0.4674 |
| 1.0612 | 28.0 | 10528 | 1.0576 | 0.4691 |
| 1.0701 | 29.0 | 10904 | 1.0568 | 0.4691 |
| 1.0631 | 30.0 | 11280 | 1.0561 | 0.4741 |
| 1.0623 | 31.0 | 11656 | 1.0555 | 0.4758 |
| 1.0571 | 32.0 | 12032 | 1.0549 | 0.4791 |
| 1.0769 | 33.0 | 12408 | 1.0543 | 0.4841 |
| 1.0511 | 34.0 | 12784 | 1.0537 | 0.4891 |
| 1.0652 | 35.0 | 13160 | 1.0532 | 0.4891 |
| 1.0631 | 36.0 | 13536 | 1.0527 | 0.4908 |
| 1.0446 | 37.0 | 13912 | 1.0523 | 0.4908 |
| 1.0591 | 38.0 | 14288 | 1.0519 | 0.4925 |
| 1.0589 | 39.0 | 14664 | 1.0516 | 0.4925 |
| 1.0552 | 40.0 | 15040 | 1.0512 | 0.4942 |
| 1.0353 | 41.0 | 15416 | 1.0509 | 0.4925 |
| 1.0348 | 42.0 | 15792 | 1.0507 | 0.4958 |
| 1.0561 | 43.0 | 16168 | 1.0505 | 0.4992 |
| 1.0679 | 44.0 | 16544 | 1.0503 | 0.4992 |
| 1.0611 | 45.0 | 16920 | 1.0501 | 0.5008 |
| 1.0413 | 46.0 | 17296 | 1.0500 | 0.5008 |
| 1.0517 | 47.0 | 17672 | 1.0499 | 0.5008 |
| 1.0644 | 48.0 | 18048 | 1.0499 | 0.5008 |
| 1.052 | 49.0 | 18424 | 1.0498 | 0.5008 |
| 1.0428 | 50.0 | 18800 | 1.0498 | 0.5008 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
| Osquery/1a5e2b8e | Osquery | 2023-12-17T22:14:33Z | 4 | 0 | transformers | ["transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:udpos28", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2023-12-16T23:40:30Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- udpos28
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: 1a5e2b8e
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: udpos28
type: udpos28
config: te
split: validation
args: te
metrics:
- name: Precision
type: precision
value: 0.894336015358501
- name: Recall
type: recall
value: 0.8576779328683283
- name: F1
type: f1
value: 0.8680916339670367
- name: Accuracy
type: accuracy
value: 0.947129909365559
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1a5e2b8e
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the udpos28 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3219
- Precision: 0.8943
- Recall: 0.8577
- F1: 0.8681
- Accuracy: 0.9471
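A minimal inference sketch: the checkpoint was evaluated on the Telugu (`te`) split of udpos28, so real inputs should be Telugu text; the sentence below is only a placeholder.
```python
from transformers import pipeline

tagger = pipeline("token-classification", model="Osquery/1a5e2b8e", aggregation_strategy="simple")
print(tagger("Replace this placeholder with a Telugu sentence."))
```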
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0423 | 7.58 | 1000 | 0.3219 | 0.8943 | 0.8577 | 0.8681 | 0.9471 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| owanr/ghc-roberta-base-intra-shuffle-model_annots_alpha0.0_whole_1e-05 | owanr | 2023-12-17T22:10:18Z | 0 | 0 | null | ["pytorch", "safetensors", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "region:us"] | null | 2023-12-17T22:09:50Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: ghc-roberta-base-intra-shuffle-model_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ghc-roberta-base-intra-shuffle-model_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.941 | 1.0 | 11020 | 0.9253 |
| 0.939 | 2.0 | 22040 | 0.9253 |
| 0.911 | 3.0 | 33060 | 0.9253 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| owanr/Sentiment-roberta-base-intra-shuffle-model_annots_alpha0.0_whole_1e-05 | owanr | 2023-12-17T22:10:12Z | 0 | 0 | null | ["pytorch", "safetensors", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "region:us"] | null | 2023-12-17T22:09:43Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: Sentiment-roberta-base-intra-shuffle-model_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment-roberta-base-intra-shuffle-model_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0376
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.144 | 1.0 | 5628 | 3.0376 |
| 3.213 | 2.0 | 11256 | 3.0376 |
| 3.115 | 3.0 | 16884 | 3.0376 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| rizalmilyardi/IndobertTopicClassify01 | rizalmilyardi | 2023-12-17T22:08:55Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-12-17T22:03:20Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: IndobertTopicClassify01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IndobertTopicClassify01
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7277
- Accuracy: 0.8175
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 200 | 1.2833 | 0.69 |
| No log | 2.0 | 400 | 0.8090 | 0.8 |
| 1.3814 | 3.0 | 600 | 0.7277 | 0.8175 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.13.3
| katxtong/coqa_full | katxtong | 2023-12-17T22:02:26Z | 13 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "t5", "question-answering", "generated_from_trainer", "dataset:coqa", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us"] | question-answering | 2023-12-14T18:56:41Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- coqa
model-index:
- name: coqa_full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# coqa_full
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the coqa dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| quantumaikr/quantum-dpo-v0.1 | quantumaikr | 2023-12-17T21:55:57Z | 1,548 | 2 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-12-17T21:32:31Z |
---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---
# quantumaikr/quantum-dpo-v0.1
## Usage
Start chatting with `quantumaikr/quantum-dpo-v0.1` using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("quantumaikr/quantum-dpo-v0.1")
model = AutoModelForCausalLM.from_pretrained("quantumaikr/quantum-dpo-v0.1", torch_dtype=torch.float16, device_map="auto")
system_prompt = "You are QuantumLM, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal."
message = "Write me a poem please"
prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{message}[/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, temperature=0.7, top_p=0.95, top_k=30, max_new_tokens=2048)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
QuantumLM should be used with this prompt format:
```
### System:
This is a system prompt, please behave and help the user.
### User:
Your prompt here
### Assistant
The output of QuantumLM
```
## Use and Limitations
### Intended Use
These models are intended for research only, in adherence with the [CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.
### Limitations and bias
Although the aforementioned dataset helps to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use it responsibly.
Contact us : [email protected]
| owanr/Sentiment-roberta-base-inter-shuffle-model_annots_alpha0.0_whole_1e-05 | owanr | 2023-12-17T21:49:21Z | 0 | 0 | null | ["pytorch", "safetensors", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "region:us"] | null | 2023-12-17T21:49:04Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: Sentiment-roberta-base-inter-shuffle-model_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment-roberta-base-inter-shuffle-model_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.286 | 1.0 | 5628 | 2.9077 |
| 3.321 | 2.0 | 11256 | 2.9077 |
| 3.117 | 3.0 | 16884 | 2.9077 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| gunkaynar/bert-base-multilingual-uncased-sentiment | gunkaynar | 2023-12-17T21:39:50Z | 13 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:nlptown/bert-base-multilingual-uncased-sentiment", "base_model:finetune:nlptown/bert-base-multilingual-uncased-sentiment", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-12-11T16:34:54Z |
---
license: mit
base_model: nlptown/bert-base-multilingual-uncased-sentiment
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-multilingual-uncased-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased-sentiment
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4877
- Accuracy: 0.7447
- F1: 0.7972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.1.1
- Datasets 2.14.7
- Tokenizers 0.11.0
| gonxatroll/ppo-Pyramids | gonxatroll | 2023-12-17T21:37:23Z | 10 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us"] | reinforcement-learning | 2023-12-17T21:05:11Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: gonxatroll/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_SystemError0.0_Seed103 | behzadnet | 2023-12-17T21:36:36Z | 0 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us"] | null | 2023-12-17T21:36:33Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
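For reference, a minimal sketch (not from the original card) of expressing this quantization setup with `transformers`' `BitsAndBytesConfig`; pass it as `quantization_config` when loading the base model.

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the values listed above: 4-bit NF4 with double quantization and bf16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```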
### Framework versions
- PEFT 0.7.0.dev0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_SystemError0.0_Seed103
|
behzadnet
| 2023-12-17T21:36:27Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-17T21:36:22Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
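Since no snippet is provided yet, here is a minimal, unverified sketch of attaching this adapter to the base model listed above; the 4-bit settings mirror the quantization config reported further down, and device placement is an assumption.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "Trelis/Llama-2-7b-chat-hf-sharded-bf16"
adapter_id = "behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_SystemError0.0_Seed103"

# 4-bit NF4 loading, matching the bitsandbytes config in this card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the PEFT adapter weights
```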
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
yishanz/mistral-7b-finetuned-datatalk
|
yishanz
| 2023-12-17T21:36:24Z | 0 | 0 | null |
[
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"region:us"
] |
text-generation
| 2023-12-17T21:36:17Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
danlindb/a2c-PandaReachDense-v3
|
danlindb
| 2023-12-17T21:31:13Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-17T21:23:30Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.23 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual SB3 Hub naming convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename is assumed)
checkpoint = load_from_hub(repo_id="danlindb/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
owanr/Sentiment-roberta-base-inter-shuffle-human_annots_alpha0.0_whole_1e-05
|
owanr
| 2023-12-17T21:29:38Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-12-17T21:29:20Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: Sentiment-roberta-base-inter-shuffle-human_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment-roberta-base-inter-shuffle-human_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.853 | 1.0 | 5628 | 2.7827 |
| 3.002 | 2.0 | 11256 | 2.7827 |
| 2.839 | 3.0 | 16884 | 2.7827 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
bartowski/dolphin-2.5-mixtral-8x7b-exl2
|
bartowski
| 2023-12-17T21:25:56Z | 5 | 4 | null |
[
"text-generation",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:ehartford/dolphin-coder",
"dataset:migtissera/Synthia-v1.3",
"dataset:teknium/openhermes",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:LDJnr/Pure-Dove",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-12-17T05:01:10Z |
---
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/dolphin-coder
- migtissera/Synthia-v1.3
- teknium/openhermes
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- LDJnr/Pure-Dove
language:
- en
license: apache-2.0
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of dolphin-2.5-mixtral-8x7b
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json used for further conversions.
Conversion was done using the default calibration dataset.
Default arguments were used, except that above 6.0 bits per weight the lm_head layer is quantized at 8 bits per weight instead of the default 6.
Original model: https://huggingface.co/ehartford/dolphin-2.5-mixtral-8x7b
<a href="https://huggingface.co/bartowski/dolphin-2.5-mixtral-8x7b-exl2/tree/3_0">3.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/dolphin-2.5-mixtral-8x7b-exl2/tree/3_5">3.5 bits per weight</a>
<a href="https://huggingface.co/bartowski/dolphin-2.5-mixtral-8x7b-exl2/tree/3_75">3.75 bits per weight</a>
<a href="https://huggingface.co/bartowski/dolphin-2.5-mixtral-8x7b-exl2/tree/4_0">4.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/dolphin-2.5-mixtral-8x7b-exl2/tree/5_0">5.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/dolphin-2.5-mixtral-8x7b-exl2/tree/6_0">6.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/dolphin-2.5-mixtral-8x7b-exl2/tree/8_0">8.0 bits per weight</a>
## Download instructions
With git:
```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/dolphin-2.5-mixtral-8x7b-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you just need measurement.json) to a folder called `dolphin-2.5-mixtral-8x7b-exl2`:
```shell
mkdir dolphin-2.5-mixtral-8x7b-exl2
huggingface-cli download bartowski/dolphin-2.5-mixtral-8x7b-exl2 --local-dir dolphin-2.5-mixtral-8x7b-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir dolphin-2.5-mixtral-8x7b-exl2
huggingface-cli download bartowski/dolphin-2.5-mixtral-8x7b-exl2 --revision 4_0 --local-dir dolphin-2.5-mixtral-8x7b-exl2 --local-dir-use-symlinks False
```
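The same branch-specific download can also be done from Python with `huggingface_hub`'s `snapshot_download`; treat this as a sketch rather than part of the original instructions.

```python
from huggingface_hub import snapshot_download

# Download the 4.0 bpw branch; see the branch links above for other sizes
snapshot_download(
    repo_id="bartowski/dolphin-2.5-mixtral-8x7b-exl2",
    revision="4_0",
    local_dir="dolphin-2.5-mixtral-8x7b-exl2",
    local_dir_use_symlinks=False,
)
```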
|
alitolga/deberta-v3-base-peft
|
alitolga
| 2023-12-17T21:19:05Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"region:us"
] | null | 2023-12-14T12:41:10Z |
---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-base-peft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-peft
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
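For illustration only (not from the original card), these settings map roughly onto `transformers`' `TrainingArguments`; the output directory is an assumption, and the Adam betas/epsilon above are already the library defaults.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above
args = TrainingArguments(
    output_dir="deberta-v3-base-peft",  # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```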
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.6045 | 1.0 | 258 | 5.7917 |
| 4.3948 | 2.0 | 516 | 3.4037 |
| 3.771 | 3.0 | 774 | 2.8971 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
alitolga/deberta-base-peft
|
alitolga
| 2023-12-17T21:13:26Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/deberta-base",
"base_model:finetune:microsoft/deberta-base",
"license:mit",
"region:us"
] | null | 2023-12-14T12:17:28Z |
---
license: mit
base_model: microsoft/deberta-base
tags:
- generated_from_trainer
model-index:
- name: deberta-base-peft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-peft
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5691
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4191 | 1.0 | 389 | 1.2536 |
| 1.2007 | 2.0 | 778 | 0.6712 |
| 0.9788 | 3.0 | 1167 | 0.5691 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
dsuhcs/video-mae-ollie-kickflip-1
|
dsuhcs
| 2023-12-17T21:11:40Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"videomae",
"video-classification",
"license:mit",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-12-17T20:24:33Z |
---
license: mit
---
Simple model for video classification of ollie and kickflip skateboard tricks.
|
ccdv/lsg-legal-small-uncased-4096
|
ccdv
| 2023-12-17T21:11:13Z | 5,609 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"long context",
"legal",
"fill-mask",
"custom_code",
"en",
"arxiv:2210.15497",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- long context
- legal
pipeline_tag: fill-mask
---
# LSG model
**Transformers >= 4.36.1**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \
Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).
* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
* [Training global tokens](#training-global-tokens)
This model is a small version of the [LEGAL-BERT](https://huggingface.co/nlpaueb/legal-bert-small-uncased) model without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer.
This model can handle long sequences faster and more efficiently than Longformer or BigBird (from Transformers), relying on Local + Sparse + Global (LSG) attention.
The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended to let the tokenizer truncate the inputs (truncation=True) and optionally pad them to a multiple of the block size (pad_to_multiple_of=...).
Encoder-decoder use is supported but I didn't test it extensively.\
Implemented in PyTorch.

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ccdv/legal-lsg-small-uncased-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-small-uncased-4096")
```
## Parameters
You can change various parameters like:
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file
Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("ccdv/legal-lsg-small-uncased-4096",
trust_remote_code=True,
num_global_tokens=16,
block_size=64,
sparse_block_size=64,
    attention_probs_dropout_prob=0.0,
sparsity_factor=4,
sparsity_type="none",
mask_first_token=True
)
```
## Sparse selection type
There are 6 different sparse selection patterns. The best type is task dependent. \
If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \
Note that for sequences with length < 2*block_size, the type has no effect.
* `sparsity_type="bos_pooling"` (new)
* weighted average pooling using the BOS token
    * Works best in general, especially with a rather large sparsity_factor (8, 16, 32); see the sketch after this list
* Additional parameters:
* None
* `sparsity_type="norm"`, select highest norm tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* `sparsity_type="pooling"`, use average pooling to merge tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens
* Works best for a large sparsity_factor (4+)
* LSH relies on random projections, thus inference may differ slightly with different seeds
* Additional parameters:
* lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* `sparsity_type="stride"`, use a striding mechanism per head
    * Each head will use different tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads
* `sparsity_type="block_stride"`, use a striding mechanism per head
    * Each head will use blocks of tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads
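For example, a minimal sketch (not from the original card) of selecting the `bos_pooling` pattern with a large sparsity factor, reusing the loading call from the Parameters section:

```python
from transformers import AutoModel

# bos_pooling with a rather large sparsity_factor, as recommended above
model = AutoModel.from_pretrained(
    "ccdv/legal-lsg-small-uncased-4096",
    trust_remote_code=True,
    sparsity_type="bos_pooling",
    sparsity_factor=8,
)
```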
## Tasks
Fill mask example:
```python
from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer
model = AutoModelForMaskedLM.from_pretrained("ccdv/legal-lsg-small-uncased-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-small-uncased-4096")
SENTENCES = ["Paris is the <mask> of France.", "The goal of life is <mask>."]
pipeline = FillMaskPipeline(model, tokenizer)
output = pipeline(SENTENCES, top_k=1)
output = [o[0]["sequence"] for o in output]
> ['Paris is the capital of France.', 'The goal of life is happiness.']
```
Classification example:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/legal-lsg-small-uncased-4096",
trust_remote_code=True,
pool_with_global=True, # pool with a global token instead of first token
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-small-uncased-4096")
SENTENCE = "This is a test for sequence classification. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
#pad_to_multiple_of=... # Optional
truncation=True
)
output = model(**token_ids)
> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
## Training global tokens
To train global tokens and the classification head only:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/legal-lsg-small-uncased-4096",
trust_remote_code=True,
pool_with_global=True, # pool with a global token instead of first token
num_global_tokens=16
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-small-uncased-4096")
for name, param in model.named_parameters():
if "global_embeddings" not in name:
param.requires_grad = False
else:
        param.requires_grad = True
```
**LEGAL-BERT**
```
@inproceedings{chalkidis-etal-2020-legal,
title = "{LEGAL}-{BERT}: The Muppets straight out of Law School",
author = "Chalkidis, Ilias and
Fergadiotis, Manos and
Malakasiotis, Prodromos and
Aletras, Nikolaos and
Androutsopoulos, Ion",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
doi = "10.18653/v1/2020.findings-emnlp.261",
pages = "2898--2904"
}
```
|
ccdv/lsg-distilbert-base-uncased-4096
|
ccdv
| 2023-12-17T21:11:02Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"long context",
"custom_code",
"en",
"arxiv:2210.15497",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2022-03-08T15:40:18Z |
---
language: en
tags:
- distilbert
- long context
---
# LSG model
**Transformers >= 4.36.1**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \
Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).
* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
* [Training global tokens](#training-global-tokens)
This model is adapted from [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer.
This model can handle long sequences faster and more efficiently than Longformer or BigBird (from Transformers), relying on Local + Sparse + Global (LSG) attention.
The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended to let the tokenizer truncate the inputs (truncation=True) and optionally pad them to a multiple of the block size (pad_to_multiple_of=...).
Encoder-decoder use and causal masking are supported but I didn't test them extensively.\
Implemented in PyTorch.

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096")
```
## Parameters
You can change various parameters like:
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file
Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096",
trust_remote_code=True,
num_global_tokens=16,
block_size=64,
sparse_block_size=64,
    attention_probs_dropout_prob=0.0,
sparsity_factor=4,
sparsity_type="none",
mask_first_token=True
)
```
## Sparse selection type
There are 6 different sparse selection patterns. The best type is task dependent. \
If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \
Note that for sequences with length < 2*block_size, the type has no effect.
* `sparsity_type="bos_pooling"` (new)
* weighted average pooling using the BOS token
* Works best in general, especially with a rather large sparsity_factor (8, 16, 32)
* Additional parameters:
* None
* `sparsity_type="norm"`, select highest norm tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* `sparsity_type="pooling"`, use average pooling to merge tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens
* Works best for a large sparsity_factor (4+)
* LSH relies on random projections, thus inference may differ slightly with different seeds
* Additional parameters:
* lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* `sparsity_type="stride"`, use a striding mechanism per head
    * Each head will use different tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads
* `sparsity_type="block_stride"`, use a striding mechanism per head
    * Each head will use blocks of tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads
## Tasks
Fill mask example:
```python
from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer
model = AutoModelForMaskedLM.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096")
SENTENCES = ["Paris is the <mask> of France.", "The goal of life is <mask>."]
pipeline = FillMaskPipeline(model, tokenizer)
output = pipeline(SENTENCES, top_k=1)
output = [o[0]["sequence"] for o in output]
> ['Paris is the capital of France.', 'The goal of life is happiness.']
```
Classification example:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096",
trust_remote_code=True,
pool_with_global=True, # pool with a global token instead of first token
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096")
SENTENCE = "This is a test for sequence classification. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
#pad_to_multiple_of=... # Optional
truncation=True
)
output = model(**token_ids)
> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
## Training global tokens
To train global tokens and the classification head only:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096",
trust_remote_code=True,
pool_with_global=True, # pool with a global token instead of first token
num_global_tokens=16
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096")
for name, param in model.named_parameters():
if "global_embeddings" not in name:
param.requires_grad = False
else:
        param.requires_grad = True
```
|
ccdv/lsg-bart-base-16384
|
ccdv
| 2023-12-17T21:10:30Z | 21 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"long context",
"fill-mask",
"custom_code",
"en",
"arxiv:2210.15497",
"arxiv:1910.13461",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2022-06-28T14:44:38Z |
---
tags:
- summarization
- bart
- long context
language:
- en
pipeline_tag: fill-mask
---
# LSG model
**Transformers >= 4.36.1**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \
Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).
* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
This model is adapted from [BART-base](https://huggingface.co/facebook/bart-base) for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.
This model can handle long sequences faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub, relying on Local + Sparse + Global (LSG) attention.
The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended to let the tokenizer truncate the inputs (truncation=True) and optionally pad them to a multiple of the block size (pad_to_multiple_of=...). \
Implemented in PyTorch.

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ccdv/lsg-bart-base-16384", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-16384")
```
## Parameters
You can change various parameters like:
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file
Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("ccdv/lsg-bart-base-16384",
trust_remote_code=True,
num_global_tokens=16,
block_size=64,
sparse_block_size=64,
    attention_probs_dropout_prob=0.0,
sparsity_factor=4,
sparsity_type="none",
mask_first_token=True
)
```
## Sparse selection type
There are 6 different sparse selection patterns. The best type is task dependent. \
If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \
Note that for sequences with length < 2*block_size, the type has no effect.
* `sparsity_type="bos_pooling"` (new)
* weighted average pooling using the BOS token
* Works best in general, especially with a rather large sparsity_factor (8, 16, 32)
* Additional parameters:
* None
* `sparsity_type="norm"`, select highest norm tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* `sparsity_type="pooling"`, use average pooling to merge tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens
* Works best for a large sparsity_factor (4+)
* LSH relies on random projections, thus inference may differ slightly with different seeds
* Additional parameters:
* lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* `sparsity_type="stride"`, use a striding mechanism per head
    * Each head will use different tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads
* `sparsity_type="block_stride"`, use a striding mechanism per head
    * Each head will use blocks of tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads
## Tasks
Seq2Seq example for summarization:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-16384",
trust_remote_code=True,
pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-16384")
SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
padding="max_length", # Optional but recommended
truncation=True # Optional but recommended
)
output = model(**token_ids)
```
Classification example:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-bart-base-16384",
trust_remote_code=True,
pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-16384")
SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
#pad_to_multiple_of=... # Optional
truncation=True
)
output = model(**token_ids)
> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
**BART**
```
@article{DBLP:journals/corr/abs-1910-13461,
author = {Mike Lewis and
Yinhan Liu and
Naman Goyal and
Marjan Ghazvininejad and
Abdelrahman Mohamed and
Omer Levy and
Veselin Stoyanov and
Luke Zettlemoyer},
title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
Generation, Translation, and Comprehension},
journal = {CoRR},
volume = {abs/1910.13461},
year = {2019},
url = {http://arxiv.org/abs/1910.13461},
eprinttype = {arXiv},
eprint = {1910.13461},
timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
ccdv/lsg-bart-base-4096-pubmed
|
ccdv
| 2023-12-17T21:10:22Z | 13 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"custom_code",
"en",
"dataset:scientific_papers",
"arxiv:2210.15497",
"autotrain_compatible",
"region:us"
] |
summarization
| 2022-05-09T16:20:01Z |
---
language:
- en
tags:
- summarization
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: ccdv/lsg-bart-base-4096-pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
**Transformers >= 4.36.1**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \
Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096-pubmed", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096-pubmed", trust_remote_code=True)
text = "Replace by what you want."
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0)
generated_text = pipe(
text,
truncation=True,
max_length=64,
no_repeat_ngram_size=7,
num_beams=2,
early_stopping=True
)
```
# ccdv/lsg-bart-base-4096-pubmed
This model is a fine-tuned version of [ccdv/lsg-bart-base-4096](https://huggingface.co/ccdv/lsg-bart-base-4096) on the [scientific_papers pubmed](https://huggingface.co/datasets/scientific_papers) dataset. \
It achieves the following results on the test set:
| Length | Sparse Type | Block Size | Sparsity | Connexions | R1 | R2 | RL | RLsum |
|:------ |:------------ |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | Local | 256 | 0 | 768 | 47.37 | 21.74 | 28.59 | 43.67 |
| 4096 | Local | 128 | 0 | 384 | 47.02 | 21.33 | 28.34 | 43.31 |
| 4096 | Pooling | 128 | 4 | 644 | 47.11 | 21.42 | 28.43 | 43.40 |
| 4096 | Stride | 128 | 4 | 644 | 47.16 | 21.49 | 28.38 | 43.44 |
| 4096 | Block Stride | 128 | 4 | 644 | 47.13 | 21.46 | 28.39 | 43.42 |
| 4096 | Norm | 128 | 4 | 644 | 47.09 | 21.44 | 28.40 | 43.36 |
| 4096 | LSH | 128 | 4 | 644 | 47.11 | 21.41 | 28.41 | 43.42 |
With smaller block size (lower resources):
| Length | Sparse Type | Block Size | Sparsity | Connexions | R1 | R2 | RL | RLsum |
|:------ |:------------ |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | Local | 64 | 0 | 192 | 45.74 | 20.26 | 27.51 | 41.99 |
| 4096 | Local | 32 | 0 | 96 | 42.69 | 17.83 | 25.62 | 38.89 |
| 4096 | Pooling | 32 | 4 | 160 | 44.60 | 19.35 | 26.83 | 40.85 |
| 4096 | Stride | 32 | 4 | 160 | 45.52 | 20.07 | 27.39 | 41.75 |
| 4096 | Block Stride | 32 | 4 | 160 | 45.30 | 19.89 | 27.22 | 41.54 |
| 4096 | Norm | 32 | 4 | 160 | 44.30 | 19.05 | 26.57 | 40.47 |
| 4096 | LSH | 32 | 4 | 160 | 44.53 | 19.27 | 26.84 | 40.74 |
## Model description
The model relies on Local-Sparse-Global attention to handle long sequences:

The model has about 145 million parameters (6 encoder layers - 6 decoder layers). \
The model is warm-started from BART-base, converted to handle long sequences (encoder only) and fine-tuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8.0
### Generate hyperparameters
The following hyperparameters were used during generation:
- dataset_name: scientific_papers
- dataset_config_name: pubmed
- eval_batch_size: 8
- eval_samples: 6658
- early_stopping: True
- ignore_pad_token_for_loss: True
- length_penalty: 2.0
- max_length: 512
- min_length: 128
- num_beams: 5
- no_repeat_ngram_size: None
- seed: 123
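For illustration only (not part of the original card), these settings translate roughly into a `generate()` call like the following; the input text is a placeholder.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096-pubmed", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096-pubmed", trust_remote_code=True)

inputs = tokenizer("Replace by a long article.", return_tensors="pt", truncation=True)
# Mirrors the evaluation generation settings listed above
summary_ids = model.generate(
    **inputs,
    num_beams=5,
    max_length=512,
    min_length=128,
    length_penalty=2.0,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```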
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
ccdv/lsg-bart-base-4096-multinews
|
ccdv
| 2023-12-17T21:10:18Z | 26 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"custom_code",
"en",
"dataset:multi_news",
"arxiv:2210.15497",
"autotrain_compatible",
"region:us"
] |
summarization
| 2022-05-25T11:09:23Z |
---
language:
- en
tags:
- summarization
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: ccdv/lsg-bart-base-4096-multinews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
**Transformers >= 4.36.1**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \
Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096-multinews", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096-multinews", trust_remote_code=True)
text = "Replace by what you want."
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0)
generated_text = pipe(
text,
truncation=True,
max_length=64,
no_repeat_ngram_size=7,
num_beams=2,
early_stopping=True
)
```
# ccdv/lsg-bart-base-4096-multinews
This model is a fine-tuned version of [ccdv/lsg-bart-base-4096](https://huggingface.co/ccdv/lsg-bart-base-4096) on the [multi_news default](https://huggingface.co/datasets/multi_news) dataset. \
It achieves the following results on the test set:
| Length | Sparse Type | Block Size | Sparsity | Connexions | R1 | R2 | RL | RLsum |
|:------ |:------------ |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | Local | 256 | 0 | 768 | 47.10 | 18.94 | 25.22 | 43.13 |
| 4096 | Local | 128 | 0 | 384 | 46.73 | 18.79 | 25.13 | 42.76 |
| 4096 | Pooling | 128 | 4 | 644 | 46.83 | 18.87 | 25.23 | 42.86 |
| 4096 | Stride | 128 | 4 | 644 | 46.83 | 18.68 | 24.98 | 42.88 |
| 4096 | Block Stride | 128 | 4 | 644 | 46.83 | 18.72 | 25.06 | 42.88 |
| 4096 | Norm | 128 | 4 | 644 | 46.74 | 18.60 | 24.93 | 42.79 |
| 4096 | LSH | 128 | 4 | 644 | 46.74 | 18.82 | 25.19 | 42.77 |
With smaller block size (lower resources):
| Length | Sparse Type | Block Size | Sparsity | Connexions | R1 | R2 | RL | RLsum |
|:------ |:------------ |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | Local | 64 | 0 | 192 | 45.61 | 17.91 | 24.54 | 41.65 |
| 4096 | Local | 32 | 0 | 96 | 43.50 | 16.36 | 23.45 | 39.61 |
| 4096 | Pooling | 32 | 4 | 160 | 44.77 | 17.31 | 24.16 | 40.86 |
| 4096 | Stride | 32 | 4 | 160 | 45.29 | 17.81 | 24.45 | 41.40 |
| 4096 | Block Stride | 32 | 4 | 160 | 45.39 | 17.86 | 24.51 | 41.43 |
| 4096 | Norm | 32 | 4 | 160 | 44.65 | 17.25 | 24.09 | 40.76 |
| 4096 | LSH | 32 | 4 | 160 | 44.44 | 17.20 | 24.00 | 40.57 |
## Model description
The model relies on Local-Sparse-Global attention to handle long sequences:

The model has about 145 million parameters (6 encoder layers - 6 decoder layers). \
The model is warm-started from BART-base, converted to handle long sequences (encoder only) and fine-tuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12.0
### Generate hyperparameters
The following hyperparameters were used during generation:
- dataset_name: multi_news
- dataset_config_name: default
- eval_batch_size: 8
- eval_samples: 5622
- early_stopping: True
- ignore_pad_token_for_loss: True
- length_penalty: 2.0
- max_length: 320
- min_length: 32
- num_beams: 5
- no_repeat_ngram_size: None
- seed: 123
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
ccdv/lsg-bart-base-4096-arxiv
|
ccdv
| 2023-12-17T21:10:03Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"custom_code",
"en",
"dataset:scientific_papers",
"arxiv:2210.15497",
"autotrain_compatible",
"region:us"
] |
summarization
| 2022-05-09T15:53:09Z |
---
language:
- en
tags:
- summarization
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: ccdv/lsg-bart-base-4096-arxiv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
**Transformers >= 4.36.1**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \
Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096-arxiv", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096-arxiv", trust_remote_code=True)
text = "Replace by what you want."
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0)
generated_text = pipe(
text,
truncation=True,
max_length=64,
no_repeat_ngram_size=7,
num_beams=2,
early_stopping=True
)
```
# ccdv/lsg-bart-base-4096-arxiv
This model is a fine-tuned version of [ccdv/lsg-bart-base-4096](https://huggingface.co/ccdv/lsg-bart-base-4096) on the [scientific_papers arxiv](https://huggingface.co/datasets/scientific_papers) dataset. \
It achieves the following results on the test set:
| Length | Sparse Type | Block Size | Sparsity | Connexions | R1 | R2 | RL | RLsum |
|:------ |:------------ |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | Local | 256 | 0 | 768 | 46.65 | 18.91 | 26.90 | 42.18 |
| 4096 | Local | 128 | 0 | 384 | 46.18 | 18.57 | 26.71 | 41.69 |
| 4096 | Pooling | 128 | 4 | 644 | 46.27 | 18.68 | 26.87 | 41.82 |
| 4096 | Stride | 128 | 4 | 644 | 46.34 | 18.64 | 26.69 | 41.87 |
| 4096 | Block Stride | 128 | 4 | 644 | 46.23 | 18.62 | 26.62 | 41.80 |
| 4096 | Norm | 128 | 4 | 644 | 45.96 | 18.46 | 26.52 | 41.51 |
| 4096 | LSH | 128 | 4 | 644 | 46.19 | 18.72 | 26.89 | 41.76 |
With smaller block size (lower resources):
| Length | Sparse Type | Block Size | Sparsity | Connexions | R1 | R2 | RL | RLsum |
|:------ |:------------ |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | Local | 64 | 0 | 192 | 44.71 | 17.53 | 26.03 | 40.23 |
| 4096 | Local | 32 | 0 | 96 | 39.67 | 14.34 | 23.81 | 35.00 |
| 4096 | Pooling | 32 | 4 | 160 | 42.75 | 16.34 | 25.20 | 38.23 |
| 4096 | Stride | 32 | 4 | 160 | 44.23 | 17.21 | 25.71 | 39.72 |
| 4096 | Block Stride | 32 | 4 | 160 | 44.15 | 17.10 | 25.68 | 39.60 |
| 4096 | Norm | 32 | 4 | 160 | 42.02 | 15.65 | 24.56 | 37.45 |
| 4096 | LSH | 32 | 4 | 160 | 42.58 | 16.21 | 25.10 | 38.04 |
## Model description
The model relies on Local-Sparse-Global attention to handle long sequences:

The model has about 145 million parameters (6 encoder layers - 6 decoder layers). \
The model is warm-started from BART-base, converted to handle long sequences (encoder only) and fine-tuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6.0
### Generate hyperparameters
The following hyperparameters were used during generation:
- dataset_name: scientific_papers
- dataset_config_name: arxiv
- eval_batch_size: 8
- eval_samples: 6440
- early_stopping: True
- ignore_pad_token_for_loss: True
- length_penalty: 2.0
- max_length: 320
- min_length: 32
- num_beams: 5
- no_repeat_ngram_size: None
- seed: 123
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
ccdv/lsg-bart-base-4096
|
ccdv
| 2023-12-17T21:10:01Z | 39 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"long context",
"fill-mask",
"custom_code",
"en",
"arxiv:2210.15497",
"arxiv:1910.13461",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
tags:
- summarization
- bart
- long context
language:
- en
pipeline_tag: fill-mask
---
# LSG model
**Transformers >= 4.36.1**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \
Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).
* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
This model is adapted from [BART-base](https://huggingface.co/facebook/bart-base) for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.
This model can handle long sequences faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub, relying on Local + Sparse + Global (LSG) attention.
The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended to let the tokenizer truncate the inputs (truncation=True) and optionally pad them to a multiple of the block size (pad_to_multiple_of=...).
Implemented in PyTorch.

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ccdv/lsg-bart-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096")
```
## Parameters
You can change various parameters like:
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file
Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("ccdv/lsg-bart-base-4096",
trust_remote_code=True,
num_global_tokens=16,
block_size=64,
sparse_block_size=64,
    attention_probs_dropout_prob=0.0,
sparsity_factor=4,
sparsity_type="none",
mask_first_token=True
)
```
## Sparse selection type
There are 6 different sparse selection patterns. The best type is task dependent. \
If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \
Note that for sequences with length < 2*block_size, the type has no effect.
* `sparsity_type="bos_pooling"` (new)
* weighted average pooling using the BOS token
* Works best in general, especially with a rather large sparsity_factor (8, 16, 32)
* Additional parameters:
* None
* `sparsity_type="norm"`, select highest norm tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* `sparsity_type="pooling"`, use average pooling to merge tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens
* Works best for a large sparsity_factor (4+)
* LSH relies on random projections, thus inference may differ slightly with different seeds
* Additional parameters:
* lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* `sparsity_type="stride"`, use a striding mechanism per head
    * Each head will use different tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads
* `sparsity_type="block_stride"`, use a striding mechanism per head
    * Each head will use blocks of tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads
## Tasks
Seq2Seq example for summarization:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096",
trust_remote_code=True,
pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096")
SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
padding="max_length", # Optional but recommended
truncation=True # Optional but recommended
)
output = model(**token_ids)
```
Classification example:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-bart-base-4096",
trust_remote_code=True,
pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096")
SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
#pad_to_multiple_of=... # Optional
truncation=True
)
output = model(**token_ids)
> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
**BART**
```
@article{DBLP:journals/corr/abs-1910-13461,
author = {Mike Lewis and
Yinhan Liu and
Naman Goyal and
Marjan Ghazvininejad and
Abdelrahman Mohamed and
Omer Levy and
Veselin Stoyanov and
Luke Zettlemoyer},
title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
Generation, Translation, and Comprehension},
journal = {CoRR},
volume = {abs/1910.13461},
year = {2019},
url = {http://arxiv.org/abs/1910.13461},
eprinttype = {arXiv},
eprint = {1910.13461},
timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
owanr/SBIC-roberta-base-inter-frequency-model_annots_alpha0.0_whole_1e-05
|
owanr
| 2023-12-17T21:07:55Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-12-17T21:07:38Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: SBIC-roberta-base-inter-frequency-model_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SBIC-roberta-base-inter-frequency-model_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1768
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.202 | 1.0 | 12516 | 1.1768 |
| 1.228 | 2.0 | 25032 | 1.1768 |
| 1.209 | 3.0 | 37548 | 1.1768 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
rizalmilyardi/IndobertTypeNewsClassify02
|
rizalmilyardi
| 2023-12-17T20:51:48Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-17T20:35:10Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: IndobertTypeNewsClassify02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IndobertTypeNewsClassify02
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3068
- Accuracy: 0.9491
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 192 | 0.2083 | 0.9295 |
| No log | 2.0 | 384 | 0.2298 | 0.9504 |
| 0.1682 | 3.0 | 576 | 0.2888 | 0.9452 |
| 0.1682 | 4.0 | 768 | 0.3007 | 0.9465 |
| 0.1682 | 5.0 | 960 | 0.2916 | 0.9517 |
| 0.0369 | 6.0 | 1152 | 0.3068 | 0.9491 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.13.3
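A minimal inference sketch with the `transformers` pipeline; the example headline is hypothetical and the label names depend on this model's config:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="rizalmilyardi/IndobertTypeNewsClassify02",
)

# Hypothetical Indonesian news headline
print(classifier("Pemerintah mengumumkan kebijakan baru untuk sektor pendidikan"))
```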
|
owanr/Sentiment-roberta-base-inter-sorted-model_annots_alpha0.0_whole_1e-05
|
owanr
| 2023-12-17T20:49:08Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-12-17T20:48:50Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: Sentiment-roberta-base-inter-sorted-model_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment-roberta-base-inter-sorted-model_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3196
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.476 | 1.0 | 5628 | 2.3196 |
| 2.571 | 2.0 | 11256 | 2.3196 |
| 2.453 | 3.0 | 16884 | 2.3196 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Tirendaz/emotion-analysis-with-distilbert
|
Tirendaz
| 2023-12-17T20:41:50Z | 14 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-04T11:46:14Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Tirendaz/emotion-analysis-with-distilbert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Tirendaz/emotion-analysis-with-distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1347
- Validation Loss: 0.1393
- Train Accuracy: 0.937
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.3781 | 0.1645 | 0.927 | 0 |
| 0.1347 | 0.1393 | 0.937 | 1 |
### Framework versions
- Transformers 4.33.0
- TensorFlow 2.12.0
- Datasets 2.15.0
- Tokenizers 0.13.3
|
owanr/SBIC-roberta-base-inter-frequency-human_annots_alpha0.0_whole_1e-05
|
owanr
| 2023-12-17T20:41:41Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-12-17T20:41:24Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: SBIC-roberta-base-inter-frequency-human_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SBIC-roberta-base-inter-frequency-human_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.173 | 1.0 | 12516 | 2.1218 |
| 2.133 | 2.0 | 25032 | 2.1218 |
| 2.158 | 3.0 | 37548 | 2.1218 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Oyunbaatar/roberta-base-ner-demo
|
Oyunbaatar
| 2023-12-17T20:35:08Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"mn",
"base_model:bayartsogt/mongolian-roberta-base",
"base_model:finetune:bayartsogt/mongolian-roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-17T20:34:42Z |
---
language:
- mn
base_model: bayartsogt/mongolian-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-ner-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-ner-demo
This model is a fine-tuned version of [bayartsogt/mongolian-roberta-base](https://huggingface.co/bayartsogt/mongolian-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1352
- Precision: 0.9297
- Recall: 0.9366
- F1: 0.9331
- Accuracy: 0.9801
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1678 | 1.0 | 477 | 0.0929 | 0.8136 | 0.8806 | 0.8457 | 0.9679 |
| 0.0635 | 2.0 | 954 | 0.0894 | 0.8477 | 0.8933 | 0.8699 | 0.9708 |
| 0.0291 | 3.0 | 1431 | 0.0840 | 0.9262 | 0.9357 | 0.9309 | 0.9809 |
| 0.0163 | 4.0 | 1908 | 0.0928 | 0.9269 | 0.9357 | 0.9313 | 0.9805 |
| 0.0087 | 5.0 | 2385 | 0.1048 | 0.9259 | 0.9352 | 0.9305 | 0.9802 |
| 0.0059 | 6.0 | 2862 | 0.1179 | 0.9271 | 0.9339 | 0.9305 | 0.9794 |
| 0.0032 | 7.0 | 3339 | 0.1230 | 0.9278 | 0.9353 | 0.9316 | 0.9800 |
| 0.002 | 8.0 | 3816 | 0.1335 | 0.9285 | 0.9337 | 0.9311 | 0.9795 |
| 0.0016 | 9.0 | 4293 | 0.1341 | 0.9287 | 0.9358 | 0.9322 | 0.9799 |
| 0.0013 | 10.0 | 4770 | 0.1352 | 0.9297 | 0.9366 | 0.9331 | 0.9801 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
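A minimal inference sketch with the `transformers` token-classification pipeline; the example sentence is a hypothetical Mongolian input and the entity labels come from this model's config:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Oyunbaatar/roberta-base-ner-demo",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

# Hypothetical Mongolian sentence
print(ner("Улаанбаатар хотод Д.Сүхбаатарын хөшөө байдаг."))
```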
|
owanr/Sentiment-roberta-base-inter-sorted-human_annots_alpha0.0_whole_1e-05
|
owanr
| 2023-12-17T20:28:13Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-12-17T19:44:16Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: Sentiment-roberta-base-inter-sorted-human_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment-roberta-base-inter-sorted-human_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.859 | 1.0 | 5628 | 1.7642 |
| 1.9 | 2.0 | 11256 | 1.7642 |
| 1.791 | 3.0 | 16884 | 1.7642 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
espnet/ofuton_p_utagoe_db_svs_naive_rnn_dp
|
espnet
| 2023-12-17T20:24:09Z | 0 | 0 |
espnet
|
[
"espnet",
"audio",
"singing-voice-synthesis",
"jp",
"dataset:ofuton_p_utagoe_db",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null | 2023-12-17T20:23:55Z |
---
tags:
- espnet
- audio
- singing-voice-synthesis
language: jp
datasets:
- ofuton_p_utagoe_db
license: cc-by-4.0
---
## ESPnet2 SVS model
### `espnet/ofuton_p_utagoe_db_svs_naive_rnn_dp`
This model was trained by ftshijt using ofuton_p_utagoe_db recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 5c4d7cf7feba8461de2e1080bf82182f0efaef38
pip install -e .
cd egs2/ofuton_p_utagoe_db/svs1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/ofuton_p_utagoe_db_svs_naive_rnn_dp
```
## SVS config
<details><summary>expand</summary>
```
config: conf/tuning/train_naive_rnn_dp.yaml
print_config: false
log_level: INFO
drop_last_iter: false
dry_run: false
iterator_type: sequence
valid_iterator_type: null
output_dir: exp/svs_train_naive_rnn_dp_raw_phn_pyopenjtalk_jp
ngpu: 1
seed: 0
num_workers: 8
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 500
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 2
nbest_averaging_interval: 0
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
use_lora: false
save_lora_only: true
lora_conf: {}
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/svs_stats_raw_phn_pyopenjtalk_jp/train/text_shape.phn
- exp/svs_stats_raw_phn_pyopenjtalk_jp/train/singing_shape
valid_shape_file:
- exp/svs_stats_raw_phn_pyopenjtalk_jp/valid/text_shape.phn
- exp/svs_stats_raw_phn_pyopenjtalk_jp/valid/singing_shape
batch_type: sorted
valid_batch_type: null
fold_length:
- 150
- 240000
sort_in_batch: descending
shuffle_within_batch: false
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
chunk_default_fs: null
train_data_path_and_name_and_type:
- - dump/raw/tr_no_dev/text
- text
- text
- - dump/raw/tr_no_dev/wav.scp
- singing
- sound
- - dump/raw/tr_no_dev/label
- label
- duration
- - dump/raw/tr_no_dev/score.scp
- score
- score
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - dump/raw/dev/wav.scp
- singing
- sound
- - dump/raw/dev/label
- label
- duration
- - dump/raw/dev/score.scp
- score
- score
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
allow_multi_rates: false
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-06
weight_decay: 0.0
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- pau
- a
- o
- i
- u
- e
- k
- n
- r
- t
- m
- N
- s
- w
- y
- d
- g
- sh
- b
- ch
- cl
- ts
- p
- z
- h
- j
- f
- ry
- v
- ty
- by
- py
- ky
- dy
- my
- ny
- hy
- gy
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: pyopenjtalk
fs: 24000
score_feats_extract: syllable_score_feats
score_feats_extract_conf:
fs: 24000
n_fft: 2048
win_length: 1200
hop_length: 300
feats_extract: fbank
feats_extract_conf:
n_fft: 2048
hop_length: 300
win_length: 1200
fs: 24000
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/svs_stats_raw_phn_pyopenjtalk_jp/train/feats_stats.npz
svs: naive_rnn_dp
svs_conf:
midi_dim: 129
embed_dim: 512
duration_dim: 500
eprenet_conv_layers: 0
eprenet_conv_chans: 256
eprenet_conv_filts: 3
elayers: 3
eunits: 256
ebidirectional: true
midi_embed_integration_type: add
dlayers: 2
dunits: 256
dbidirectional: true
postnet_layers: 5
postnet_chans: 512
postnet_filts: 5
use_batch_norm: true
reduction_factor: 1
eprenet_dropout_rate: 0.2
edropout_rate: 0.1
ddropout_rate: 0.1
postnet_dropout_rate: 0.5
init_type: pytorch
use_masking: true
pitch_extract: dio
pitch_extract_conf:
use_token_averaged_f0: false
fs: 24000
n_fft: 2048
hop_length: 300
f0max: 800
f0min: 80
reduction_factor: 1
pitch_normalize: global_mvn
pitch_normalize_conf:
stats_file: exp/svs_stats_raw_phn_pyopenjtalk_jp/train/pitch_stats.npz
ying_extract: null
ying_extract_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202310'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{shi22d_interspeech,
author={Jiatong Shi and Shuai Guo and Tao Qian and Tomoki Hayashi and Yuning Wu and Fangzheng Xu and Xuankai Chang and Huazhe Li and Peter Wu and Shinji Watanabe and Qin Jin},
title={{Muskits: an End-to-end Music Processing Toolkit for Singing Voice Synthesis}},
year=2022,
booktitle={Proc. Interspeech 2022},
pages={4277--4281},
doi={10.21437/Interspeech.2022-10039}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
EugeneEvstafev/Mistral-7B-v0.1-chess-01
|
EugeneEvstafev
| 2023-12-17T20:21:45Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2023-12-17T18:41:59Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
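A minimal sketch of attaching this PEFT adapter to the base model named in the metadata (`mistralai/Mistral-7B-v0.1`); the chess-notation prompt is only an assumption based on the repository name:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Attach the adapter weights from this repository
model = PeftModel.from_pretrained(base, "EugeneEvstafev/Mistral-7B-v0.1-chess-01")

inputs = tokenizer("1. e4 e5 2. Nf3", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```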
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
pEpOo/catastrophy5
|
pEpOo
| 2023-12-17T20:21:33Z | 7 | 0 |
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"model-index",
"region:us"
] |
text-classification
| 2023-12-17T20:20:54Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: A traumatised dog that was found buried up to its head in dirt in France is
now in safe hands. This is such a... http://t.co/AGQo1479xM
- text: 'Hibernating pbx irrespective of pitch fatality careerism pan: crbZFZ'
- text: Stuart Broad Takes Eight Before Joe Root Runs Riot Against Aussies
- text: Maj Muzzamil Pilot Offr of MI-17 crashed near Mansehra today. http://t.co/kL4R1ccWct
- text: '@AdriaSimon_: Hailstorm day 2.... #round2 #yyc #yycstorm http://t.co/FqQI8GVLQ4'
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/all-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/all-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.8172066549912435
name: Accuracy
---
# SetFit with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
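As a rough sketch (not the exact script used for this checkpoint), the two steps above look like this with the SetFit 1.0 API; the tiny in-line dataset is purely illustrative:
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot dataset with "text" and "label" columns
train_ds = Dataset.from_dict({
    "text": ["Forest fire near La Ronge Sask. Canada", "I love fruits"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")
args = TrainingArguments(batch_size=16, num_epochs=1)

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()  # 1) contrastive fine-tuning of the body, 2) fitting the LogisticRegression head
```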
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>"Was '80s New #Wave a #Casualty of #AIDS?: Tweet And Since they\x89Ûªd grown up watching David\x89Û_ http://t.co/qBecjli7cx"</li><li>"@CharlesDagnall He's getting 50 here I think. Salt. Wounds. Rub. In."</li><li>'Navy sidelines 3 newest subs http://t.co/gpVZV0249Y'</li></ul> |
| 1 | <ul><li>'The Latest: More Homes Razed by Northern California Wildfire - ABC News http://t.co/bKsYymvIsg #GN'</li><li>'@Durban_Knight Rescuers are searching for hundreds of migrants in the Mediterranean after a boat carr... http://t.co/cWCVBuBs01 @Nosy_Be'</li><li>'NEMA Ekiti distributed relief materials to affected victims of Rain/Windstorm disaster at Ode-Ekiti in Gbonyin LGA.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8172 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("pEpOo/catastrophy5")
# Run inference
preds = model("Stuart Broad Takes Eight Before Joe Root Runs Riot Against Aussies")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 1 | 14.9796 | 54 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 1732 |
| 1 | 1313 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.3383 | - |
| 0.0066 | 50 | 0.352 | - |
| 0.0131 | 100 | 0.3529 | - |
| 0.0197 | 150 | 0.2286 | - |
| 0.0263 | 200 | 0.2654 | - |
| 0.0328 | 250 | 0.2892 | - |
| 0.0394 | 300 | 0.1808 | - |
| 0.0460 | 350 | 0.2056 | - |
| 0.0525 | 400 | 0.0863 | - |
| 0.0591 | 450 | 0.2034 | - |
| 0.0657 | 500 | 0.1339 | - |
| 0.0722 | 550 | 0.1022 | - |
| 0.0788 | 600 | 0.1083 | - |
| 0.0854 | 650 | 0.1035 | - |
| 0.0919 | 700 | 0.1201 | - |
| 0.0985 | 750 | 0.0626 | - |
| 0.1051 | 800 | 0.1257 | - |
| 0.1117 | 850 | 0.1543 | - |
| 0.1182 | 900 | 0.0367 | - |
| 0.1248 | 950 | 0.1749 | - |
| 0.1314 | 1000 | 0.0553 | - |
| 0.1379 | 1050 | 0.0836 | - |
| 0.1445 | 1100 | 0.0161 | - |
| 0.1511 | 1150 | 0.1149 | - |
| 0.1576 | 1200 | 0.1144 | - |
| 0.1642 | 1250 | 0.0028 | - |
| 0.1708 | 1300 | 0.0037 | - |
| 0.1773 | 1350 | 0.1769 | - |
| 0.1839 | 1400 | 0.0172 | - |
| 0.1905 | 1450 | 0.0397 | - |
| 0.1970 | 1500 | 0.0645 | - |
| 0.2036 | 1550 | 0.0659 | - |
| 0.2102 | 1600 | 0.0014 | - |
| 0.2167 | 1650 | 0.0016 | - |
| 0.2233 | 1700 | 0.0729 | - |
| 0.2299 | 1750 | 0.0072 | - |
| 0.2364 | 1800 | 0.0175 | - |
| 0.2430 | 1850 | 0.0278 | - |
| 0.2496 | 1900 | 0.0537 | - |
| 0.2561 | 1950 | 0.0038 | - |
| 0.2627 | 2000 | 0.087 | - |
| 0.2693 | 2050 | 0.0459 | - |
| 0.2758 | 2100 | 0.0169 | - |
| 0.2824 | 2150 | 0.0112 | - |
| 0.2890 | 2200 | 0.001 | - |
| 0.2955 | 2250 | 0.0204 | - |
| 0.3021 | 2300 | 0.0796 | - |
| 0.3087 | 2350 | 0.0592 | - |
| 0.3153 | 2400 | 0.0003 | - |
| 0.3218 | 2450 | 0.0033 | - |
| 0.3284 | 2500 | 0.0309 | - |
| 0.3350 | 2550 | 0.0065 | - |
| 0.3415 | 2600 | 0.002 | - |
| 0.3481 | 2650 | 0.0076 | - |
| 0.3547 | 2700 | 0.0008 | - |
| 0.3612 | 2750 | 0.0023 | - |
| 0.3678 | 2800 | 0.0028 | - |
| 0.3744 | 2850 | 0.0171 | - |
| 0.3809 | 2900 | 0.0011 | - |
| 0.3875 | 2950 | 0.0015 | - |
| 0.3941 | 3000 | 0.0468 | - |
| 0.4006 | 3050 | 0.0075 | - |
| 0.4072 | 3100 | 0.0009 | - |
| 0.4138 | 3150 | 0.0334 | - |
| 0.4203 | 3200 | 0.0002 | - |
| 0.4269 | 3250 | 0.0001 | - |
| 0.4335 | 3300 | 0.0002 | - |
| 0.4400 | 3350 | 0.0001 | - |
| 0.4466 | 3400 | 0.021 | - |
| 0.4532 | 3450 | 0.0043 | - |
| 0.4597 | 3500 | 0.0084 | - |
| 0.4663 | 3550 | 0.0009 | - |
| 0.4729 | 3600 | 0.0033 | - |
| 0.4794 | 3650 | 0.0035 | - |
| 0.4860 | 3700 | 0.0004 | - |
| 0.4926 | 3750 | 0.0297 | - |
| 0.4991 | 3800 | 0.0004 | - |
| 0.5057 | 3850 | 0.0011 | - |
| 0.5123 | 3900 | 0.0238 | - |
| 0.5188 | 3950 | 0.0248 | - |
| 0.5254 | 4000 | 0.0293 | - |
| 0.5320 | 4050 | 0.0365 | - |
| 0.5386 | 4100 | 0.0261 | - |
| 0.5451 | 4150 | 0.0469 | - |
| 0.5517 | 4200 | 0.0098 | - |
| 0.5583 | 4250 | 0.0002 | - |
| 0.5648 | 4300 | 0.0236 | - |
| 0.5714 | 4350 | 0.0001 | - |
| 0.5780 | 4400 | 0.0001 | - |
| 0.5845 | 4450 | 0.0001 | - |
| 0.5911 | 4500 | 0.0138 | - |
| 0.5977 | 4550 | 0.0116 | - |
| 0.6042 | 4600 | 0.0003 | - |
| 0.6108 | 4650 | 0.0003 | - |
| 0.6174 | 4700 | 0.0001 | - |
| 0.6239 | 4750 | 0.0 | - |
| 0.6305 | 4800 | 0.0246 | - |
| 0.6371 | 4850 | 0.0001 | - |
| 0.6436 | 4900 | 0.0543 | - |
| 0.6502 | 4950 | 0.0001 | - |
| 0.6568 | 5000 | 0.0093 | - |
| 0.6633 | 5050 | 0.0001 | - |
| 0.6699 | 5100 | 0.0 | - |
| 0.6765 | 5150 | 0.0002 | - |
| 0.6830 | 5200 | 0.0001 | - |
| 0.6896 | 5250 | 0.0372 | - |
| 0.6962 | 5300 | 0.0 | - |
| 0.7027 | 5350 | 0.0001 | - |
| 0.7093 | 5400 | 0.0001 | - |
| 0.7159 | 5450 | 0.0003 | - |
| 0.7224 | 5500 | 0.0004 | - |
| 0.7290 | 5550 | 0.0001 | - |
| 0.7356 | 5600 | 0.0 | - |
| 0.7422 | 5650 | 0.0 | - |
| 0.7487 | 5700 | 0.0001 | - |
| 0.7553 | 5750 | 0.0001 | - |
| 0.7619 | 5800 | 0.0 | - |
| 0.7684 | 5850 | 0.0 | - |
| 0.7750 | 5900 | 0.0 | - |
| 0.7816 | 5950 | 0.0 | - |
| 0.7881 | 6000 | 0.0 | - |
| 0.7947 | 6050 | 0.0 | - |
| 0.8013 | 6100 | 0.0 | - |
| 0.8078 | 6150 | 0.0001 | - |
| 0.8144 | 6200 | 0.0001 | - |
| 0.8210 | 6250 | 0.0 | - |
| 0.8275 | 6300 | 0.0 | - |
| 0.8341 | 6350 | 0.0 | - |
| 0.8407 | 6400 | 0.0002 | - |
| 0.8472 | 6450 | 0.0 | - |
| 0.8538 | 6500 | 0.0001 | - |
| 0.8604 | 6550 | 0.0 | - |
| 0.8669 | 6600 | 0.0001 | - |
| 0.8735 | 6650 | 0.0001 | - |
| 0.8801 | 6700 | 0.0 | - |
| 0.8866 | 6750 | 0.0 | - |
| 0.8932 | 6800 | 0.0373 | - |
| 0.8998 | 6850 | 0.0 | - |
| 0.9063 | 6900 | 0.0 | - |
| 0.9129 | 6950 | 0.0272 | - |
| 0.9195 | 7000 | 0.0 | - |
| 0.9260 | 7050 | 0.0 | - |
| 0.9326 | 7100 | 0.0001 | - |
| 0.9392 | 7150 | 0.0 | - |
| 0.9458 | 7200 | 0.0002 | - |
| 0.9523 | 7250 | 0.0001 | - |
| 0.9589 | 7300 | 0.0 | - |
| 0.9655 | 7350 | 0.0 | - |
| 0.9720 | 7400 | 0.0 | - |
| 0.9786 | 7450 | 0.0001 | - |
| 0.9852 | 7500 | 0.0 | - |
| 0.9917 | 7550 | 0.0 | - |
| 0.9983 | 7600 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Tylerswe/zinbo-llama2-7b
|
Tylerswe
| 2023-12-17T20:20:16Z | 0 | 0 | null |
[
"safetensors",
"autotrain",
"text-generation",
"license:other",
"region:us"
] |
text-generation
| 2023-12-17T20:20:06Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
leeda36/matroskin_LoRA
|
leeda36
| 2023-12-17T20:20:09Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-17T15:01:50Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK cat
license: openrail++
---
# SDXL LoRA DreamBooth - leeda36/matroskin_LoRA
<Gallery />
## Model description
These are leeda36/matroskin_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK cat` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](leeda36/matroskin_LoRA/tree/main) them in the Files & versions tab.
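A minimal sketch of loading these weights with `diffusers` (the extra prompt text and the CUDA device are assumptions):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("leeda36/matroskin_LoRA")

# Trigger phrase plus a hypothetical scene description
image = pipe("a photo of TOK cat sitting on a windowsill").images[0]
image.save("matroskin.png")
```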
|
dmanary-pronavigator/Mixtral-8x7B-instruct-exl2-3-0bpw
|
dmanary-pronavigator
| 2023-12-17T20:16:30Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"fr",
"it",
"de",
"es",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-06-21T21:21:27Z |
---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
inference: false
---
# Model Card for Mixtral-8x7B
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
## Warning
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
## Instruction format
This format must be strictly respected, otherwise the model will generate sub-optimal outputs.
The template used to build a prompt for the Instruct model is defined as follows:
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS), while `[INST]` and `[/INST]` are regular strings.
As reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
```python
def tokenize(text):
return tok.encode(text, add_special_tokens=False)
[BOS_ID] +
tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_1) + [EOS_ID] +
…
tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_N) + [EOS_ID]
```
In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.
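In practice, `tokenizer.apply_chat_template` builds this format for you; a short sketch with made-up messages:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "A good squeeze of fresh lemon juice."},
    {"role": "user", "content": "Do you have mayonnaise recipes?"},
]

# Returns token ids wrapped in the <s> [INST] ... [/INST] ... </s> format shown above
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
```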
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, transformers will load the model in full precision. You may therefore want to reduce the memory requirements further by using the optimizations we offer in the HF ecosystem:
### In half-precision
Note `float16` precision only works on GPU devices
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Lower precision using (8-bit & 4-bit) using `bitsandbytes`
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Load the model with Flash Attention 2
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
## Limitations
The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
Fo1zsyzrk/ppo-LunarLander-v2
|
Fo1zsyzrk
| 2023-12-17T20:15:46Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-17T20:15:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 279.49 +/- 18.68
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
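A minimal sketch of loading and evaluating this checkpoint (the `.zip` filename inside the repo is an assumption; check the Files & versions tab for the actual name):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename inside the repo is an assumption
checkpoint = load_from_hub("Fo1zsyzrk/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```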
|
CarlBrendt/llama2-dialogsum-adapter
|
CarlBrendt
| 2023-12-17T20:14:22Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:adapter:NousResearch/Llama-2-7b-hf",
"region:us"
] | null | 2023-12-17T19:53:01Z |
---
library_name: peft
base_model: NousResearch/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
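A minimal sketch of attaching this PEFT adapter to the base model named in the metadata (`NousResearch/Llama-2-7b-hf`); the dialogue-summarization prompt format is an assumption based on the repository name:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-7b-hf", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "CarlBrendt/llama2-dialogsum-adapter")

# Hypothetical dialogsum-style prompt
prompt = "Summarize the following dialogue.\n#Person1#: Hi, how have you been?\n#Person2#: Busy, I just moved house.\nSummary:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```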
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
bdsaglam/llama-2-7b-chat-hf-kg-cons-multi-1702827674
|
bdsaglam
| 2023-12-17T20:04:15Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-12-17T20:04:02Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
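Expressed as a `transformers` quantization config, the values above correspond to the following sketch, which would be passed to `from_pretrained(..., quantization_config=bnb_config)`:
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```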
### Framework versions
- PEFT 0.4.0
|
espnet/opencpop_xiaoice
|
espnet
| 2023-12-17T20:00:45Z | 0 | 0 |
espnet
|
[
"espnet",
"audio",
"singing-voice-synthesis",
"zh",
"dataset:opencpop",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null | 2023-12-17T20:00:23Z |
---
tags:
- espnet
- audio
- singing-voice-synthesis
language: zh
datasets:
- opencpop
license: cc-by-4.0
---
## ESPnet2 SVS model
### `espnet/opencpop_xiaoice`
This model was trained by ftshijt using opencpop recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 5c4d7cf7feba8461de2e1080bf82182f0efaef38
pip install -e .
cd egs2/opencpop/svs1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/opencpop_xiaoice
```
## SVS config
<details><summary>expand</summary>
```
config: conf/tuning/train_xiaoice.yaml
print_config: false
log_level: INFO
drop_last_iter: false
dry_run: false
iterator_type: sequence
valid_iterator_type: null
output_dir: exp/svs_train_xiaoice_raw_phn_None_zh
ngpu: 1
seed: 0
num_workers: 10
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 500
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
use_lora: false
save_lora_only: true
lora_conf: {}
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/svs_stats_raw_phn_None_zh/train/text_shape.phn
- exp/svs_stats_raw_phn_None_zh/train/singing_shape
valid_shape_file:
- exp/svs_stats_raw_phn_None_zh/valid/text_shape.phn
- exp/svs_stats_raw_phn_None_zh/valid/singing_shape
batch_type: sorted
valid_batch_type: null
fold_length:
- 150
- 240000
sort_in_batch: descending
shuffle_within_batch: false
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
chunk_default_fs: null
train_data_path_and_name_and_type:
- - dump24k/raw/tr_no_dev/text
- text
- text
- - dump24k/raw/tr_no_dev/wav.scp
- singing
- sound
- - dump24k/raw/tr_no_dev/label
- label
- duration
- - dump24k/raw/tr_no_dev/score.scp
- score
- score
- - exp/svs_stats_raw_phn_None_zh/train/collect_feats/pitch.scp
- pitch
- npy
- - exp/svs_stats_raw_phn_None_zh/train/collect_feats/feats.scp
- feats
- npy
valid_data_path_and_name_and_type:
- - dump24k/raw/dev/text
- text
- text
- - dump24k/raw/dev/wav.scp
- singing
- sound
- - dump24k/raw/dev/label
- label
- duration
- - dump24k/raw/dev/score.scp
- score
- score
- - exp/svs_stats_raw_phn_None_zh/valid/collect_feats/pitch.scp
- pitch
- npy
- - exp/svs_stats_raw_phn_None_zh/valid/collect_feats/feats.scp
- feats
- npy
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
allow_multi_rates: false
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-06
weight_decay: 0.0
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- SP
- i
- AP
- e
- y
- d
- w
- sh
- ai
- n
- x
- j
- ian
- u
- l
- h
- b
- o
- zh
- an
- ou
- m
- q
- z
- en
- g
- ing
- ei
- ao
- ang
- uo
- eng
- t
- a
- ong
- ui
- k
- f
- r
- iang
- ch
- v
- in
- iao
- ie
- iu
- c
- s
- van
- p
- ve
- uan
- uang
- ia
- ua
- uai
- un
- er
- vn
- iong
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
fs: 24000
score_feats_extract: syllable_score_feats
score_feats_extract_conf:
fs: 24000
n_fft: 2048
win_length: 1200
hop_length: 300
feats_extract: fbank
feats_extract_conf:
n_fft: 2048
hop_length: 300
win_length: 1200
fs: 24000
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/svs_stats_raw_phn_None_zh/train/feats_stats.npz
svs: xiaoice
svs_conf:
midi_dim: 129
duration_dim: 512
adim: 384
aheads: 4
elayers: 6
eunits: 1536
dlayers: 6
dunits: 1536
postnet_layers: 5
postnet_chans: 512
postnet_filts: 5
postnet_dropout_rate: 0.5
use_batch_norm: true
reduction_factor: 1
init_type: pytorch
use_masking: true
loss_function: XiaoiceSing2
loss_type: L1
lambda_mel: 1
lambda_dur: 0.1
lambda_pitch: 0.01
lambda_vuv: 0.01
pitch_extract: dio
pitch_extract_conf:
use_token_averaged_f0: false
fs: 24000
n_fft: 2048
hop_length: 300
f0max: 800
f0min: 80
reduction_factor: 1
pitch_normalize: global_mvn
pitch_normalize_conf:
stats_file: exp/svs_stats_raw_phn_None_zh/train/pitch_stats.npz
ying_extract: null
ying_extract_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202310'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{shi22d_interspeech,
author={Jiatong Shi and Shuai Guo and Tao Qian and Tomoki Hayashi and Yuning Wu and Fangzheng Xu and Xuankai Chang and Huazhe Li and Peter Wu and Shinji Watanabe and Qin Jin},
title={{Muskits: an End-to-end Music Processing Toolkit for Singing Voice Synthesis}},
year=2022,
booktitle={Proc. Interspeech 2022},
pages={4277--4281},
doi={10.21437/Interspeech.2022-10039}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
CarlBrendt/Lama_Dialog
|
CarlBrendt
| 2023-12-17T19:57:29Z | 4 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:adapter:NousResearch/Llama-2-7b-hf",
"region:us"
] | null | 2023-12-17T19:55:15Z |
---
library_name: peft
base_model: NousResearch/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
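No starter code is provided by the authors. The sketch below is an unofficial minimal example based solely on the card metadata: it assumes this repo holds a causal-language-model PEFT adapter trained on top of `NousResearch/Llama-2-7b-hf` (the `base_model` declared above).

```python
# Unofficial sketch: load the declared base model and attach this adapter.
# Assumption: the adapter is a causal-LM adapter for NousResearch/Llama-2-7b-hf.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Llama-2-7b-hf"
adapter_id = "CarlBrendt/Lama_Dialog"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, how are you today?", return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```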
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
espnet/opencpop_visinger2
|
espnet
| 2023-12-17T19:56:00Z | 0 | 0 |
espnet
|
[
"espnet",
"audio",
"singing-voice-synthesis",
"zh",
"dataset:opencpop",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null | 2023-12-17T19:55:24Z |
---
tags:
- espnet
- audio
- singing-voice-synthesis
language: zh
datasets:
- opencpop
license: cc-by-4.0
---
## ESPnet2 SVS model
### `espnet/opencpop_visinger2`
This model was trained by ftshijt using the opencpop recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 5c4d7cf7feba8461de2e1080bf82182f0efaef38
pip install -e .
cd egs2/opencpop/svs1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/opencpop_visinger2
```
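For programmatic use outside the recipe directory, the packed files can also be fetched directly from the Hub. This is only a download sketch; synthesis itself still goes through the recipe above (or ESPnet's Python inference API, which is not covered here).

```python
# Minimal sketch: download the exported config and checkpoint with huggingface_hub,
# as an alternative to letting run.sh fetch the model.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("espnet/opencpop_visinger2")
print(local_dir)  # local directory containing the packed model files
```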
## SVS config
<details><summary>expand</summary>
```
config: conf/tuning/transfer_visinger2.yaml
print_config: false
log_level: INFO
drop_last_iter: false
dry_run: false
iterator_type: sequence
valid_iterator_type: null
output_dir: exp/svs_visinger2_normal
ngpu: 1
seed: 777
num_workers: 0
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 500
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- total_count
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: -1
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 50
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
use_lora: false
save_lora_only: true
lora_conf: {}
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 1000
batch_size: 8
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/svs_stats_raw_phn_None_zh/train/text_shape.phn
- exp/svs_stats_raw_phn_None_zh/train/singing_shape
valid_shape_file:
- exp/svs_stats_raw_phn_None_zh/valid/text_shape.phn
- exp/svs_stats_raw_phn_None_zh/valid/singing_shape
batch_type: sorted
valid_batch_type: null
fold_length:
- 150
- 409600
sort_in_batch: descending
shuffle_within_batch: false
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
chunk_default_fs: null
train_data_path_and_name_and_type:
- - dump/raw/tr_no_dev/text
- text
- text
- - dump/raw/tr_no_dev/wav.scp
- singing
- sound
- - dump/raw/tr_no_dev/label
- label
- duration
- - dump/raw/tr_no_dev/score.scp
- score
- score
- - exp/svs_stats_raw_phn_None_zh/train/collect_feats/pitch.scp
- pitch
- npy
- - exp/svs_stats_raw_phn_None_zh/train/collect_feats/feats.scp
- feats
- npy
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - dump/raw/dev/wav.scp
- singing
- sound
- - dump/raw/dev/label
- label
- duration
- - dump/raw/dev/score.scp
- score
- score
- - exp/svs_stats_raw_phn_None_zh/valid/collect_feats/pitch.scp
- pitch
- npy
- - exp/svs_stats_raw_phn_None_zh/valid/collect_feats/feats.scp
- feats
- npy
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
allow_multi_rates: false
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adamw
optim_conf:
lr: 0.0002
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler: exponentiallr
scheduler_conf:
gamma: 0.998
optim2: adamw
optim2_conf:
lr: 0.0002
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler2: exponentiallr
scheduler2_conf:
gamma: 0.998
generator_first: true
token_list:
- <blank>
- <unk>
- SP
- i
- AP
- e
- y
- d
- w
- sh
- ai
- n
- x
- j
- ian
- u
- l
- h
- b
- o
- zh
- an
- ou
- m
- q
- z
- en
- g
- ing
- ei
- ao
- ang
- uo
- eng
- t
- a
- ong
- ui
- k
- f
- r
- iang
- ch
- v
- in
- iao
- ie
- iu
- c
- s
- van
- p
- ve
- uan
- uang
- ia
- ua
- uai
- un
- er
- vn
- iong
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
fs: 44100
score_feats_extract: syllable_score_feats
score_feats_extract_conf:
fs: 44100
n_fft: 2048
win_length: 2048
hop_length: 512
feats_extract: fbank
feats_extract_conf:
n_fft: 2048
hop_length: 512
win_length: 2048
fs: 44100
fmin: 0
fmax: 22050
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/svs_stats_raw_phn_None_zh/train/feats_stats.npz
svs: vits
svs_conf:
generator_type: visinger
vocoder_generator_type: hifigan
generator_params:
hidden_channels: 192
spks: -1
global_channels: 256
segment_size: 20
text_encoder_attention_heads: 2
text_encoder_ffn_expand: 4
text_encoder_blocks: 6
text_encoder_positionwise_layer_type: conv1d
text_encoder_positionwise_conv_kernel_size: 3
text_encoder_positional_encoding_layer_type: rel_pos
text_encoder_self_attention_layer_type: rel_selfattn
text_encoder_activation_type: swish
text_encoder_normalize_before: true
text_encoder_dropout_rate: 0.1
text_encoder_positional_dropout_rate: 0.0
text_encoder_attention_dropout_rate: 0.1
use_macaron_style_in_text_encoder: true
use_conformer_conv_in_text_encoder: false
text_encoder_conformer_kernel_size: -1
decoder_kernel_size: 7
decoder_channels: 512
decoder_upsample_scales:
- 8
- 8
- 4
- 2
decoder_upsample_kernel_sizes:
- 16
- 16
- 8
- 4
decoder_resblock_kernel_sizes:
- 3
- 7
- 11
decoder_resblock_dilations:
- - 1
- 3
- 5
- - 1
- 3
- 5
- - 1
- 3
- 5
use_weight_norm_in_decoder: true
posterior_encoder_kernel_size: 3
posterior_encoder_layers: 8
posterior_encoder_stacks: 1
posterior_encoder_base_dilation: 1
posterior_encoder_dropout_rate: 0.0
use_weight_norm_in_posterior_encoder: true
flow_flows: -1
flow_kernel_size: 5
flow_base_dilation: 1
flow_layers: 4
flow_dropout_rate: 0.0
use_weight_norm_in_flow: true
use_only_mean_in_flow: true
use_phoneme_predictor: false
vocabs: 63
aux_channels: 80
generator_type: visinger
vocoder_generator_type: hifigan
fs: 44100
hop_length: 512
win_length: 2048
n_fft: 2048
discriminator_type: visinger2
discriminator_params:
scales: 1
scale_downsample_pooling: AvgPool1d
scale_downsample_pooling_params:
kernel_size: 4
stride: 2
padding: 2
scale_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 15
- 41
- 5
- 3
channels: 128
max_downsample_channels: 1024
max_groups: 256
bias: true
downsample_scales:
- 4
- 4
- 4
- 4
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
follow_official_norm: false
periods:
- 2
- 3
- 5
- 7
- 11
period_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 5
- 3
channels: 32
downsample_scales:
- 3
- 3
- 3
- 3
- 1
max_downsample_channels: 1024
bias: true
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
multi_freq_disc_params:
hidden_channels:
- 256
- 256
- 256
- 256
- 256
domain: double
mel_scale: true
divisors:
- 32
- 16
- 8
- 4
- 2
- 1
- 1
strides:
- 1
- 2
- 1
- 2
- 1
- 2
- 1
sample_rate: 44100
hop_lengths:
- 110
- 220
- 330
- 441
- 551
- 661
generator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
discriminator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
feat_match_loss_params:
average_by_discriminators: false
average_by_layers: false
include_final_outputs: true
mel_loss_params:
fs: 44100
n_fft: 2048
hop_length: 512
win_length: 2048
window: hann
n_mels: 80
fmin: 0
fmax: 22050
log_base: null
lambda_adv: 1.0
lambda_mel: 45.0
lambda_feat_match: 2.0
lambda_dur: 0.1
lambda_pitch: 10.0
lambda_phoneme: 1.0
lambda_kl: 1.0
sampling_rate: 44100
cache_generator_outputs: true
pitch_extract: dio
pitch_extract_conf:
use_token_averaged_f0: false
use_log_f0: false
fs: 44100
n_fft: 2048
hop_length: 512
f0max: 800
f0min: 80
pitch_normalize: null
pitch_normalize_conf:
stats_file: exp/svs_stats_raw_phn_None_zh/train/pitch_stats.npz
ying_extract: null
ying_extract_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202310'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{shi22d_interspeech,
author={Jiatong Shi and Shuai Guo and Tao Qian and Tomoki Hayashi and Yuning Wu and Fangzheng Xu and Xuankai Chang and Huazhe Li and Peter Wu and Shinji Watanabe and Qin Jin},
title={{Muskits: an End-to-end Music Processing Toolkit for Singing Voice Synthesis}},
year=2022,
booktitle={Proc. Interspeech 2022},
pages={4277--4281},
doi={10.21437/Interspeech.2022-10039}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ai-aerospace/autotrain-ams_v0.1_100_TinyLlama-1.1B-Chat-v0.1
|
ai-aerospace
| 2023-12-17T19:55:02Z | 0 | 0 | null |
[
"safetensors",
"text-generation",
"dataset:ai-aerospace/ams_data_train_generic_v0.1_100",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.1",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v0.1",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-12-11T03:23:27Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.1
inference: false
license: apache-2.0
model_name: TinyLlama-1.1B-Chat-v0.1
model_type: TinyLlama
pipeline_tag: text-generation
prompt_template: '###Human: {prompt}###Assistant:{response}'
datasets:
- ai-aerospace/ams_data_train_generic_v0.1_100
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
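Note that the metadata above also declares the raw prompt template `###Human: {prompt}###Assistant:{response}`. If you prefer to bypass the tokenizer's chat template, you can format prompts manually; whether this string matches the fine-tuning format exactly is an assumption based only on the metadata.

```python
# Alternative to apply_chat_template: use the prompt template declared in the card
# metadata (assumed, not verified, to match the fine-tuning format).
prompt = "What is the purpose of a reaction wheel on a spacecraft?"
text = f"###Human: {prompt}###Assistant:"
input_ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```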
|
TheBloke/GreenNodeLM-7B-v4leo-GPTQ
|
TheBloke
| 2023-12-17T19:53:07Z | 24 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"base_model:GreenNode/GreenNodeLM-7B-v4leo",
"base_model:quantized:GreenNode/GreenNodeLM-7B-v4leo",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-12-17T19:24:25Z |
---
base_model: GreenNode/GreenNodeLM-7B-v4leo
inference: false
license: apache-2.0
model_creator: GreenNode.ai
model_name: GreenNodeLM 7B V4Leo
model_type: mistral
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# GreenNodeLM 7B V4Leo - GPTQ
- Model creator: [GreenNode.ai](https://huggingface.co/GreenNode)
- Original model: [GreenNodeLM 7B V4Leo](https://huggingface.co/GreenNode/GreenNodeLM-7B-v4leo)
<!-- description start -->
# Description
This repo contains GPTQ model files for [GreenNode.ai's GreenNodeLM 7B V4Leo](https://huggingface.co/GreenNode/GreenNodeLM-7B-v4leo).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/GreenNodeLM-7B-v4leo-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/GreenNodeLM-7B-v4leo-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/GreenNodeLM-7B-v4leo-GGUF)
* [GreenNode.ai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/GreenNode/GreenNodeLM-7B-v4leo)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/GreenNodeLM-7B-v4leo-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/GreenNodeLM-7B-v4leo-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/GreenNodeLM-7B-v4leo-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/GreenNodeLM-7B-v4leo-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/GreenNodeLM-7B-v4leo-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/GreenNodeLM-7B-v4leo-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/GreenNodeLM-7B-v4leo-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/GreenNodeLM-7B-v4leo-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `GreenNodeLM-7B-v4leo-GPTQ`:
```shell
mkdir GreenNodeLM-7B-v4leo-GPTQ
huggingface-cli download TheBloke/GreenNodeLM-7B-v4leo-GPTQ --local-dir GreenNodeLM-7B-v4leo-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir GreenNodeLM-7B-v4leo-GPTQ
huggingface-cli download TheBloke/GreenNodeLM-7B-v4leo-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir GreenNodeLM-7B-v4leo-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir GreenNodeLM-7B-v4leo-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/GreenNodeLM-7B-v4leo-GPTQ --local-dir GreenNodeLM-7B-v4leo-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/GreenNodeLM-7B-v4leo-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob).
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/GreenNodeLM-7B-v4leo-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/GreenNodeLM-7B-v4leo-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `GreenNodeLM-7B-v4leo-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/GreenNodeLM-7B-v4leo-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/GreenNodeLM-7B-v4leo-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: GreenNode.ai's GreenNodeLM 7B V4Leo
# How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
from peft import PeftModel
import torch
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "7"

# Note: model_path was left undefined in the original card; the unquantised repo id
# is assumed here for illustration.
model_path = "GreenNode/GreenNodeLM-7B-v4leo"

model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto").eval()
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.config.pad_token_id = tokenizer.eos_token_id
prompts = [
"Explain QKV in Transformer.",
"Can coughing effectively stop a heart attack?",
"Who is the president of the United States?",
"A farmer has a rectangular field with a length of 150 meters and a width of 100 meters. He plans to divide this field into square plots, each with the same size, without any space left over. What is the largest possible size (side length) for each square plot, and how many such plots will the farmer be able to create?",
"A farmer has a certain number of chickens and rabbits in her farmyard. One day, she counts a total of 72 heads and 200 feet among them. How many chickens and how many rabbits are in the farmer's farmyard?",
"What items is it legal to carry for anyone in the US?",
"A man lives on the 10th floor of a building. Every day, he takes the elevator down to the ground floor to go to work. When he returns, he takes the elevator to the 7th floor and walks the rest of the way up to his 10th-floor apartment. However, on rainy days, he goes straight to the 10th floor. Why does he do this?",
"Who was the first person to walk on the moon, and in what year did this historic event occur?",
"The trophy doesn’t fit into the brown suitcase because it’s too large. What does 'it' refer to?",
"Which element makes up most of the air we breathe? (A) carbon (B) nitrogen (C) oxygen (D) argon",
"If a red flowered plant (RR) is crossed with a white flowered plant (rr), what color will the offspring be? (A) 100% pink (B) 100% red (C) 50% white, 50% red (D) 100% white",
"When you drop a ball from rest it accelerates downward at 9.8 m/s². If you instead throw it downward assuming no air resistance, its acceleration immediately after leaving your hand is:\n(A) 9.8 m/s²\n(B) more than 9.8 m/s²\n(C) less than 9.8 m/s²\n(D) Cannot say unless the speed of throw is given.",
"A snail is at the bottom of a 10-meter deep well. Every day, the snail climbs up 3 meters. However, at night, while the snail sleeps, it slides down 2 meters. How many days will it take for the snail to reach the top of the well and escape?",
"Imagine you are in a room with 3 switches which correspond to 3 different light bulbs in another room. You cannot see the bulbs from the first room. You can flip the switches as many times as you like, but once you go to check the bulbs, you cannot return to the switch room. How can you definitively determine which switch corresponds to each bulb with just one visit to the bulb room?",
"Translate from English to Vietnamese:\n\"Imagine you are in a room with 3 switches which correspond to 3 different light bulbs in another room. You cannot see the bulbs from the first room. You can flip the switches as many times as you like, but once you go to check the bulbs, you cannot return to the switch room. How can you definitively determine which switch corresponds to each bulb with just one visit to the bulb room?\""
]
system = """Below is an instruction that describes a task.
Write a response that appropriately completes the request."""
template_format = """{system}
### Instruction:
{prompt}
### Response:
"""
for prompt in prompts:
template = template_format.format(system=system, prompt=prompt)
input_ids = tokenizer([template], return_tensors="pt").to("cuda")
print(input_ids)
print(tokenizer.decode(input_ids["input_ids"][0]))
outputs = model.generate(
**input_ids,
max_new_tokens=1024,
do_sample=True,
repetition_penalty=1.1,
temperature=0.3,
top_k=10,
top_p=0.95,
)
response = tokenizer.decode(outputs[0])
print(response)
print('*'*20)
```
|
fatmhd1995/toxic_comment_model_ethos_ft
|
fatmhd1995
| 2023-12-17T19:49:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:ethos",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-16T19:23:04Z |
---
datasets:
- ethos
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
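No starter snippet is provided above. Given the card tags (`distilbert`, `text-classification`), a standard `transformers` pipeline should work; the label names returned depend on how the classifier head was configured, which the card does not document.

```python
# Minimal sketch assuming a standard DistilBERT sequence-classification checkpoint,
# as the card tags suggest; returned label names are not documented in this card.
from transformers import pipeline

classifier = pipeline("text-classification", model="fatmhd1995/toxic_comment_model_ethos_ft")
print(classifier("You are a wonderful person and I appreciate your help."))
```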
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
owanr/SBIC-roberta-base-intra-shuffle-model_annots_alpha0.0_whole_1e-05
|
owanr
| 2023-12-17T19:42:51Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-12-17T19:42:33Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: SBIC-roberta-base-intra-shuffle-model_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SBIC-roberta-base-intra-shuffle-model_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
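The sketch below maps the listed values onto `transformers.TrainingArguments`; the dataset, task head, and output directory are not documented in this card, so they are placeholders.

```python
# Reproduction sketch of the listed hyperparameters only; output_dir is a placeholder
# and the optimizer is assumed to be the Trainer default matching the Adam settings above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sbic-roberta-base-finetune",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```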
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.267 | 1.0 | 12516 | 1.2944 |
| 1.282 | 2.0 | 25032 | 1.2944 |
| 1.308 | 3.0 | 37548 | 1.2944 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
masonanalytics/PEFT-Zephyr-7B-Alpha
|
masonanalytics
| 2023-12-17T19:35:01Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-alpha",
"base_model:adapter:HuggingFaceH4/zephyr-7b-alpha",
"region:us"
] | null | 2023-12-17T19:30:54Z |
---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-alpha
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
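No starter code is provided by the authors. Since the adapter targets `HuggingFaceH4/zephyr-7b-alpha` (the `base_model` declared above), a one-step load with `AutoPeftModelForCausalLM` is the most compact option; treating this repo as a causal-LM PEFT adapter is an assumption based on the card metadata.

```python
# Unofficial sketch: one-step load of the adapter plus its declared base model.
# Assumption: this repo holds a causal-LM PEFT adapter for HuggingFaceH4/zephyr-7b-alpha.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("masonanalytics/PEFT-Zephyr-7B-Alpha", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-alpha")

inputs = tokenizer("Explain what a LoRA adapter does.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```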
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
owanr/SChem5Labels-roberta-base-intra-data-frequency-model_annots_alpha0.0_whole_1e-05
|
owanr
| 2023-12-17T19:24:42Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-12-17T19:24:21Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: SChem5Labels-roberta-base-intra-data-frequency-model_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SChem5Labels-roberta-base-intra-data-frequency-model_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9116
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.235 | 1.0 | 3164 | 3.9116 |
| 4.162 | 2.0 | 6328 | 3.9116 |
| 4.457 | 3.0 | 9492 | 3.9116 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ
|
TheBloke
| 2023-12-17T19:23:31Z | 20 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:garage-bAInd/Open-Platypus",
"base_model:kyujinpy/PlatYi-34B-Llama-Q-v3",
"base_model:quantized:kyujinpy/PlatYi-34B-Llama-Q-v3",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-12-17T17:21:16Z |
---
base_model: kyujinpy/PlatYi-34B-Llama-Q-v3
datasets:
- garage-bAInd/Open-Platypus
inference: false
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
model_creator: KyujinHan
model_name: PlatYi 34B Llama Q V3
model_type: yi
pipeline_tag: text-generation
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# PlatYi 34B Llama Q V3 - GPTQ
- Model creator: [KyujinHan](https://huggingface.co/kyujinpy)
- Original model: [PlatYi 34B Llama Q V3](https://huggingface.co/kyujinpy/PlatYi-34B-Llama-Q-v3)
<!-- description start -->
## Description
This repo contains GPTQ model files for [KyujinHan's PlatYi 34B Llama Q V3](https://huggingface.co/kyujinpy/PlatYi-34B-Llama-Q-v3).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/PlatYi-34B-Llama-Q-v3-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/PlatYi-34B-Llama-Q-v3-GGUF)
* [KyujinHan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/kyujinpy/PlatYi-34B-Llama-Q-v3)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 18.60 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 19.25 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 21.21 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 15.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 35.34 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 16.90 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 36.11 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `PlatYi-34B-Llama-Q-v3-GPTQ`:
```shell
mkdir PlatYi-34B-Llama-Q-v3-GPTQ
huggingface-cli download TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ --local-dir PlatYi-34B-Llama-Q-v3-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir PlatYi-34B-Llama-Q-v3-GPTQ
huggingface-cli download TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir PlatYi-34B-Llama-Q-v3-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir PlatYi-34B-Llama-Q-v3-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ --local-dir PlatYi-34B-Llama-Q-v3-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `PlatYi-34B-Llama-Q-v3-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: KyujinHan's PlatYi 34B Llama Q V3
# **PlatYi-34B-Llama-Q-v3**
<img src='./PlatYi.png' width=256>
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
PlatYi-34B-Llama-Q-v3 is an auto-regressive language model based on the Yi-34B transformer architecture.
**Blog Link**
Blog: [Coming soon...]
Github: [Coming soon...]
**Base Model**
[chargoddard/Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama)
**Training Dataset**
[garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
## Fix some bugs
- The previous model had some mistakes.
- I modified the templates and warmup_steps.
## Notice
While training, I used Q-LoRA.
The lora_r value is 64.
# **Model Benchmark**
## Open leaderboard
- Follow up as [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PlatYi-34B-Llama-Q-v3 | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| PlatYi-34B-Llama-Q-v2 | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| PlatYi-34B-Llama-Q | 71.13 | 65.70 | 85.22 | 78.78 | 53.64 | 83.03 | 60.42 |
| PlatYi-34B-Llama | 68.37 | 67.83 | 85.35 | 78.26 | 53.46 | 82.87 | 42.46 |
| [Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) | 70.95 | 64.59 | 85.63 | 76.31 | 55.60 | 82.79 | 60.80 |
| [Yi-34B](https://huggingface.co/01-ai/Yi-34B) | 69.42 | 64.59 | 85.69 | 76.35 | 56.23 | 83.03 | 50.64 |
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/PlatYi-34B-Llama-Q-v3"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
---
|
owanr/SChem5Labels-roberta-base-inter-data-frequency-model_annots_alpha0.0_whole_1e-05
|
owanr
| 2023-12-17T19:12:48Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-12-17T19:12:32Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: SChem5Labels-roberta-base-inter-data-frequency-model_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SChem5Labels-roberta-base-inter-data-frequency-model_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.478 | 1.0 | 3164 | 6.0808 |
| 6.494 | 2.0 | 6328 | 6.0808 |
| 6.403 | 3.0 | 9492 | 6.0808 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
sinonimayzer/UzRoBERTa-v2
|
sinonimayzer
| 2023-12-17T19:06:41Z | 28 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"uz",
"dataset:sinonimayzer/mixed-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-12-06T08:24:07Z |
---
widget:
- text: Kuchli yomg‘irlar tufayli bir qator <mask> kuchli sel oqishi kuzatildi.
example_title: Example 1
- text: >-
Shu munosabat bilan O‘zbekiston Prezidenti global inqiroz sharoitida savdo-iqtisodiy hamkorlikni <mask> va hududlararo aloqalarni rivojlantirishning muhim masalalariga to‘xtalib o‘tdi.
example_title: Example 2
tags:
- generated_from_trainer
datasets:
- sinonimayzer/mixed-data
language:
- uz
library_name: transformers
pipeline_tag: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UzRoBERTa-v2
This model was trained on the [sinonimayzer/mixed-data](https://huggingface.co/datasets/sinonimayzer/mixed-data) dataset and achieves the following results on the evaluation set:
- Loss: 1.9097
## How to use
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='sinonimayzer/UzRoBERTa-v2')
>>> unmasker("Kuchli yomg‘irlar tufayli bir qator <mask> kuchli sel oqishi kuzatildi.")
[{'score': 0.3318027853965759,
'token': 4877,
'token_str': ' hududlarda',
'sequence': 'Kuchli yomg‘irlar tufayli bir qator hududlarda kuchli sel oqishi kuzatildi.'},
{'score': 0.13175441324710846,
'token': 14470,
'token_str': ' viloyatlarda',
'sequence': 'Kuchli yomg‘irlar tufayli bir qator viloyatlarda kuchli sel oqishi kuzatildi.'},
{'score': 0.09735308587551117,
'token': 13555,
'token_str': ' tumanlarda',
'sequence': 'Kuchli yomg‘irlar tufayli bir qator tumanlarda kuchli sel oqishi kuzatildi.'},
{'score': 0.09112472087144852,
'token': 12261,
'token_str': ' shaharlarda',
'sequence': 'Kuchli yomg‘irlar tufayli bir qator shaharlarda kuchli sel oqishi kuzatildi.'},
{'score': 0.05940879508852959,
'token': 2767,
'token_str': ' joylarda',
'sequence': 'Kuchli yomg‘irlar tufayli bir qator joylarda kuchli sel oqishi kuzatildi.'}]
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 92
- eval_batch_size: 92
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.3673 | 0.25 | 100000 | 2.4588 |
| 2.0797 | 0.51 | 200000 | 2.1653 |
| 1.9369 | 0.76 | 300000 | 2.0265 |
| 1.8545 | 1.02 | 400000 | 1.9456 |
| 1.8133 | 1.27 | 500000 | 1.9101 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
owanr/ghc-roberta-base-intra-sorted-model_annots_alpha0.0_whole_1e-05
|
owanr
| 2023-12-17T18:56:55Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-12-17T18:56:38Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: ghc-roberta-base-intra-sorted-model_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ghc-roberta-base-intra-sorted-model_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.927 | 1.0 | 11020 | 0.9253 |
| 0.927 | 2.0 | 22040 | 0.9253 |
| 0.902 | 3.0 | 33060 | 0.9253 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
xezno/phi-2-alpaca-lora
|
xezno
| 2023-12-17T18:54:59Z | 0 | 0 | null |
[
"safetensors",
"dataset:tatsu-lab/alpaca",
"license:other",
"region:us"
] | null | 2023-12-17T18:50:59Z |
---
license: other
datasets:
- tatsu-lab/alpaca
---
# Model Card for phi-2-alpaca
This is a low-rank adapter for [phi-2](https://huggingface.co/microsoft/phi-2) fit on the [alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) dataset.
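A minimal usage sketch, assuming the adapter is applied on top of `microsoft/phi-2` with PEFT and queried with an Alpaca-style prompt (the prompt format is an assumption; only the base model and dataset are stated above):
```python
# Minimal sketch, assuming the adapter loads onto microsoft/phi-2 via PEFT
# and that an Alpaca-style prompt is appropriate (an assumption, not documented here).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

# Attach the low-rank adapter weights from this repository
model = PeftModel.from_pretrained(base, "xezno/phi-2-alpaca-lora")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```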
## Training Hyperparameters
The model was trained on a single A100 GPU using PEFT LoRA.
The following hyperparameters were used during training:
- Lora target modules: Wqkv, out_proj
- Lora r: 16
- lora_alpha: 16
- lora_dropout: 0.1
- learning_rate: 5e-05
- per_device_train_batch_size: 1
- gradient_accumulation_steps: 1
- training_steps: 120000
## Limitations and Bias
The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include:
- Language: The model is designed to work with English text only and may not perform as well in other languages.
In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
|
bhuvana1/anime-sdxl
|
bhuvana1
| 2023-12-17T18:51:48Z | 1 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-12-15T11:48:27Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: anime of rajinik with warm smile and cool style in hd quality
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
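A minimal inference sketch, assuming this repository holds DreamBooth LoRA weights for SDXL (AutoTrain's usual output for this kind of run); this is an illustration, not a verified recipe:
```python
# Minimal sketch, assuming the repo contains SDXL DreamBooth LoRA weights
# (AutoTrain's usual output); adjust if it actually holds a full fine-tuned pipeline.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("bhuvana1/anime-sdxl")

prompt = "anime of rajinik with warm smile and cool style in hd quality"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("anime-sdxl.png")
```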
|
Zabihin/Symptom_to_Diagnosis
|
Zabihin
| 2023-12-17T18:51:05Z | 147 | 10 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"medical",
"en",
"dataset:gretelai/symptom_to_diagnosis",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-16T21:06:40Z |
---
license: apache-2.0
base_model: bert-base-cased
datasets:
- gretelai/symptom_to_diagnosis
metrics:
- f1
tags:
- medical
widget:
- text: >-
I've been having a lot of pain in my neck and back. I've also been having
trouble with my balance and coordination. I've been coughing a lot and my
limbs feel weak.
- text: >-
I've been feeling really run down and weak. My throat is sore and I've been
coughing a lot. I've also been having chills and a fever.
model-index:
- name: Symptom_to_Diagnosis
results:
- task:
type: text-classification
dataset:
type: gretelai/symptom_to_diagnosis
name: gretelai/symptom_to_diagnosis
split: test
metrics:
- type: precision
value: 0.94
name: macro avg
- type: recall
value: 0.93
name: macro avg
- type: f1-score
value: 0.93
name: macro avg
language:
- en
---
# Symptom_to_Diagnosis
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased)
on this dataset (https://huggingface.co/datasets/gretelai/symptom_to_diagnosis).
## Model description
This model is a fine-tuned version of the bert-base-cased architecture,
specifically designed for text classification tasks related to diagnosing diseases from symptoms.
The primary objective is to analyze natural language descriptions of symptoms and predict one of 22 corresponding diagnoses.
## Dataset Information
The model was trained on the Gretel/symptom_to_diagnosis dataset, which consists of 1,065 symptom descriptions in the English language,
each labeled with one of the 22 possible diagnoses. The dataset focuses on fine-grained single-domain diagnosis,
making it suitable for tasks that require detailed classification based on symptom descriptions.
Example
{
"output_text": "drug reaction",
"input_text": "I've been having headaches and migraines, and I can't sleep. My whole body shakes and twitches. Sometimes I feel lightheaded."
}
## Use a pipeline as a high-level helper
```
from transformers import pipeline

pipe = pipeline("text-classification", model="Zabihin/Symptom_to_Diagnosis")

# Example:
result = pipe("I've been having headaches and migraines, and I can't sleep. My whole body shakes and twitches. Sometimes I feel lightheaded.")

# result:
# [{'label': 'drug reaction', 'score': 0.9489321112632751}]
```
or
```
from transformers import pipeline
# Load the model
classifier = pipeline("text-classification", model="Zabihin/Symptom_to_Diagnosis", tokenizer="Zabihin/Symptom_to_Diagnosis")
# Example input text
input_text = "I've been having headaches and migraines, and I can't sleep. My whole body shakes and twitches. Sometimes I feel lightheaded."
# Get the predicted label
result = classifier(input_text)
# Print the predicted label
predicted_label = result[0]['label']
print("Predicted Label:", predicted_label)
# Predicted Label: drug reaction
```
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
HARSHAPALNATIUNH/Githubmodel
|
HARSHAPALNATIUNH
| 2023-12-17T18:50:42Z | 15 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bloom",
"text-generation",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"base_model:finetune:bigscience/bloomz-560m",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-16T20:32:22Z |
---
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloomz-560m
tags:
- generated_from_trainer
model-index:
- name: Githubmodel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Githubmodel
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.0
|
owanr/SBIC-roberta-base-inter-shuffle-human_annots_alpha0.0_whole_1e-05
|
owanr
| 2023-12-17T18:50:21Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-12-17T18:50:01Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: SBIC-roberta-base-inter-shuffle-human_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SBIC-roberta-base-inter-shuffle-human_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.173 | 1.0 | 12516 | 2.1269 |
| 2.125 | 2.0 | 25032 | 2.1269 |
| 2.166 | 3.0 | 37548 | 2.1269 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
urbija/llama-fine-tuned-peft
|
urbija
| 2023-12-17T18:48:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-12-17T18:47:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
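The configuration above corresponds roughly to the following `BitsAndBytesConfig`; the base Llama checkpoint is not named in this card, so the repo id below is a placeholder assumption:
```python
# Minimal sketch; the base checkpoint id is a placeholder, since this card does not name it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)

base_model_id = "meta-llama/Llama-2-7b-hf"  # placeholder: not documented in this card
base = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Attach the fine-tuned adapter from this repository
model = PeftModel.from_pretrained(base, "urbija/llama-fine-tuned-peft")
```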
### Framework versions
- PEFT 0.4.0
|
tarekziade/distilbert-reuters21578
|
tarekziade
| 2023-12-17T18:39:54Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"news_classification",
"multi_label",
"en",
"dataset:reuters21578",
"base_model:distilbert/distilbert-base-cased",
"base_model:quantized:distilbert/distilbert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-17T18:29:49Z |
---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
- news_classification
- multi_label
datasets:
- reuters21578
metrics:
- f1
- accuracy
model-index:
- name: distilbert-finetuned-reuters21578-multilabel
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: reuters21578
type: reuters21578
config: ModApte
split: test
args: ModApte
metrics:
- name: F1
type: f1
value: 0.8628858578607322
- name: Accuracy
type: accuracy
value: 0.8195625759416768
language:
- en
pipeline_tag: text-classification
widget:
- text: "JAPAN TO REVISE LONG-TERM ENERGY DEMAND DOWNWARDS The Ministry of International Trade and Industry (MITI) will revise its long-term energy supply/demand outlook by August to meet a forecast downtrend in Japanese energy demand, ministry officials said. MITI is expected to lower the projection for primary energy supplies in the year 2000 to 550 mln kilolitres (kl) from 600 mln, they said. The decision follows the emergence of structural changes in Japanese industry following the rise in the value of the yen and a decline in domestic electric power demand. MITI is planning to work out a revised energy supply/demand outlook through deliberations of committee meetings of the Agency of Natural Resources and Energy, the officials said. They said MITI will also review the breakdown of energy supply sources, including oil, nuclear, coal and natural gas. Nuclear energy provided the bulk of Japan's electric power in the fiscal year ended March 31, supplying an estimated 27 pct on a kilowatt/hour basis, followed by oil (23 pct) and liquefied natural gas (21 pct), they noted. REUTER"
example_title: "Example-1"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## Origin of this model
This model was forked from https://huggingface.co/lxyuan/distilbert-finetuned-reuters21578-multilabel; I only generated the ONNX versions in `/onnx` (see the sketch below).
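A minimal sketch for loading that ONNX export with Optimum's ONNX Runtime backend; it assumes the files under `/onnx` are a standard sequence-classification export, which is not verified here:
```python
# Minimal sketch, assuming the /onnx folder holds a standard Optimum-compatible export;
# the exact file layout is not documented in this card.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model = ORTModelForSequenceClassification.from_pretrained(
    "tarekziade/distilbert-reuters21578", subfolder="onnx"
)
tokenizer = AutoTokenizer.from_pretrained("tarekziade/distilbert-reuters21578")

pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, top_k=None)
print(pipe("JAPAN TO REVISE LONG-TERM ENERGY DEMAND DOWNWARDS ..."))
```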
## Motivation
Fine-tuning on the Reuters-21578 multilabel dataset is a valuable exercise, especially as it's frequently used in take-home tests during interviews. The dataset's complexity is just right for testing multilabel classification skills within a limited timeframe, while its real-world relevance helps simulate practical challenges. Experimenting with this dataset not only helps candidates prepare for interviews but also hones various skills including preprocessing, feature extraction, and model evaluation.
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the reuters21578 dataset.
## Inference Example
```python
from transformers import pipeline
pipe = pipeline("text-classification", model="lxyuan/distilbert-finetuned-reuters21578-multilabel", return_all_scores=True)
# dataset["test"]["text"][2]
news_article = (
"JAPAN TO REVISE LONG-TERM ENERGY DEMAND DOWNWARDS The Ministry of International Trade and "
"Industry (MITI) will revise its long-term energy supply/demand "
"outlook by August to meet a forecast downtrend in Japanese "
"energy demand, ministry officials said. "
"MITI is expected to lower the projection for primary energy "
"supplies in the year 2000 to 550 mln kilolitres (kl) from 600 "
"mln, they said. "
"The decision follows the emergence of structural changes in "
"Japanese industry following the rise in the value of the yen "
"and a decline in domestic electric power demand. "
"MITI is planning to work out a revised energy supply/demand "
"outlook through deliberations of committee meetings of the "
"Agency of Natural Resources and Energy, the officials said. "
"They said MITI will also review the breakdown of energy "
"supply sources, including oil, nuclear, coal and natural gas. "
"Nuclear energy provided the bulk of Japan's electric power "
"in the fiscal year ended March 31, supplying an estimated 27 "
"pct on a kilowatt/hour basis, followed by oil (23 pct) and "
"liquefied natural gas (21 pct), they noted. "
"REUTER"
)
# dataset["test"]["topics"][2]
target_topics = ['crude', 'nat-gas']
fn_kwargs={"padding": "max_length", "truncation": True, "max_length": 512}
output = pipe(news_article, function_to_apply="sigmoid", **fn_kwargs)
for item in output[0]:
if item["score"]>=0.5:
print(item["label"], item["score"])
>>> crude 0.7355073690414429
nat-gas 0.8600426316261292
```
## Overall Summary and Comparison Table
| Metric | Baseline (Scikit-learn) | Transformer Model |
| ------------------- | ----------------------- | ----------------- |
| Micro-Averaged F1 | 0.77 | 0.86 |
| Macro-Averaged F1 | 0.29 | 0.33 |
| Weighted Average F1 | 0.70 | 0.84 |
| Samples Average F1 | 0.75 | 0.80 |
**Precision vs Recall**: Both models prioritize high precision over recall. In our client-facing news classification model, precision takes precedence over recall. This is because the repercussions of false positives are more severe and harder to justify to clients compared to false negatives. When the model incorrectly tags a news item with a topic, it's challenging to explain this error. On the other hand, if the model misses a topic, it's easier to defend by stating that the topic wasn't sufficiently emphasized in the news article.
**Class Imbalance Handling**: Both models suffer from the same general issue of not performing well on minority classes, as reflected in the low macro-averaged F1-scores. However, the transformer model shows a slight improvement, albeit marginal, in macro-averaged F1-score (0.33 vs 0.29).
**Issue of Zero Support Labels**: Both models have the problem of zero support for several labels, meaning these labels did not appear in the test set. This lack of "support" can significantly skew the performance metrics and may suggest that either the models are not well-tuned to predict these minority classes, or the dataset itself lacks sufficient examples of these classes. Given that both models struggle with low macro-averaged F1 scores, this issue further emphasizes the need for improved minority class handling in the models.
**General Performance**: The transformer model surpasses the scikit-learn baseline in terms of weighted and samples average F1-scores, indicating better overall performance and better handling of label imbalance.
**Conclusion**: While both models exhibit high precision, which is a business requirement, the transformer model slightly outperforms the scikit-learn baseline model in all metrics considered. It provides a better trade-off between precision and recall, as well as some improvement, albeit small, in handling minority classes. Thus, despite sharing similar weaknesses with the baseline, the transformer model demonstrates incremental improvements that could be significant in a production setting.
## Training and evaluation data
We remove single appearance label from both training and test sets using the following code:
```python
# Find Single Appearance Labels
def find_single_appearance_labels(y):
"""Find labels that appear only once in the dataset."""
all_labels = list(chain.from_iterable(y))
label_count = Counter(all_labels)
single_appearance_labels = [label for label, count in label_count.items() if count == 1]
return single_appearance_labels
# Remove Single Appearance Labels from Dataset
def remove_single_appearance_labels(dataset, single_appearance_labels):
"""Remove samples with single-appearance labels from both train and test sets."""
for split in ['train', 'test']:
dataset[split] = dataset[split].filter(lambda x: all(label not in single_appearance_labels for label in x['topics']))
return dataset
dataset = load_dataset("reuters21578", "ModApte")
# Find and Remove Single Appearance Labels
y_train = [item['topics'] for item in dataset['train']]
single_appearance_labels = find_single_appearance_labels(y_train)
print(f"Single appearance labels: {single_appearance_labels}")
>>> Single appearance labels: ['lin-oil', 'rye', 'red-bean', 'groundnut-oil', 'citruspulp', 'rape-meal', 'corn-oil', 'peseta', 'cotton-oil', 'ringgit', 'castorseed', 'castor-oil', 'lit', 'rupiah', 'skr', 'nkr', 'dkr', 'sun-meal', 'lin-meal', 'cruzado']
print("Removing samples with single-appearance labels...")
dataset = remove_single_appearance_labels(dataset, single_appearance_labels)
unique_labels = set(chain.from_iterable(dataset['train']["topics"]))
print(f"We have {len(unique_labels)} unique labels:\n{unique_labels}")
>>> We have 95 unique labels:
{'veg-oil', 'gold', 'platinum', 'ipi', 'acq', 'carcass', 'wool', 'coconut-oil', 'linseed', 'copper', 'soy-meal', 'jet', 'dlr', 'copra-cake', 'hog', 'rand', 'strategic-metal', 'can', 'tea', 'sorghum', 'livestock', 'barley', 'lumber', 'earn', 'wheat', 'trade', 'soy-oil', 'cocoa', 'inventories', 'income', 'rubber', 'tin', 'iron-steel', 'ship', 'rapeseed', 'wpi', 'sun-oil', 'pet-chem', 'palmkernel', 'nat-gas', 'gnp', 'l-cattle', 'propane', 'rice', 'lead', 'alum', 'instal-debt', 'saudriyal', 'cpu', 'jobs', 'meal-feed', 'oilseed', 'dmk', 'plywood', 'zinc', 'retail', 'dfl', 'cpi', 'crude', 'pork-belly', 'gas', 'money-fx', 'corn', 'tapioca', 'palladium', 'lei', 'cornglutenfeed', 'sunseed', 'potato', 'silver', 'sugar', 'grain', 'groundnut', 'naphtha', 'orange', 'soybean', 'coconut', 'stg', 'cotton', 'yen', 'rape-oil', 'palm-oil', 'oat', 'reserves', 'housing', 'interest', 'coffee', 'fuel', 'austdlr', 'money-supply', 'heat', 'fishmeal', 'bop', 'nickel', 'nzdlr'}
```
## Training procedure
[EDA on Reuters-21578 dataset](https://github.com/LxYuan0420/nlp/blob/main/notebooks/eda_reuters.ipynb):
This notebook provides an Exploratory Data Analysis (EDA) of the Reuters-21578 dataset. It includes visualizations and statistical summaries that offer insights into the dataset's structure, label distribution, and text characteristics.
[Reuters Baseline Scikit-Learn Model](https://github.com/LxYuan0420/nlp/blob/main/notebooks/scikit_learn_reuters.ipynb):
This notebook establishes a baseline model for text classification on the Reuters-21578 dataset using scikit-learn. It guides you through data preprocessing, feature extraction, model training, and evaluation.
[Reuters Transformer Model](https://github.com/LxYuan0420/nlp/blob/main/notebooks/transformer_reuters.ipynb):
This notebook delves into advanced text classification using a Transformer model on the Reuters-21578 dataset. It covers the implementation details, training process, and performance metrics of using Transformer-based models for this specific task.
[Multilabel Stratified Sampling & Hypyerparameter Search on Reuters Dataset](https://github.com/LxYuan0420/nlp/blob/main/notebooks/transformer_reuters_hyperparameter_tuning.ipynb):
In this notebook, we explore advanced machine learning techniques through the lens of the Hugging Face Trainer API, specifically targeting Multilabel Iterative Stratified Splitting and Hyperparameter Search. The former aims to fairly distribute imbalanced datasets across multiple labels in k-fold cross-validation, maintaining a distribution closely resembling that of the complete dataset. The latter walks users through a structured hyperparameter search to fine-tune model performance for optimal results.
## Evaluation results
<details>
<summary>Transformer Model Evaluation Result</summary>
Classification Report:
precision recall f1-score support
acq 0.97 0.93 0.95 719
alum 1.00 0.70 0.82 23
austdlr 0.00 0.00 0.00 0
barley 1.00 0.50 0.67 12
bop 0.79 0.50 0.61 30
can 0.00 0.00 0.00 0
carcass 0.67 0.67 0.67 18
cocoa 1.00 1.00 1.00 18
coconut 0.00 0.00 0.00 2
coconut-oil 0.00 0.00 0.00 2
coffee 0.86 0.89 0.87 27
copper 1.00 0.78 0.88 18
copra-cake 0.00 0.00 0.00 1
corn 0.84 0.87 0.86 55
cornglutenfeed 0.00 0.00 0.00 0
cotton 0.92 0.67 0.77 18
cpi 0.86 0.43 0.57 28
cpu 0.00 0.00 0.00 1
crude 0.87 0.93 0.90 189
dfl 0.00 0.00 0.00 1
dlr 0.72 0.64 0.67 44
dmk 0.00 0.00 0.00 4
earn 0.98 0.99 0.98 1087
fishmeal 0.00 0.00 0.00 0
fuel 0.00 0.00 0.00 10
gas 0.80 0.71 0.75 17
gnp 0.79 0.66 0.72 35
gold 0.95 0.67 0.78 30
grain 0.94 0.92 0.93 146
groundnut 0.00 0.00 0.00 4
heat 0.00 0.00 0.00 5
hog 1.00 0.33 0.50 6
housing 0.00 0.00 0.00 4
income 0.00 0.00 0.00 7
instal-debt 0.00 0.00 0.00 1
interest 0.89 0.67 0.77 131
inventories 0.00 0.00 0.00 0
ipi 1.00 0.58 0.74 12
iron-steel 0.90 0.64 0.75 14
jet 0.00 0.00 0.00 1
jobs 0.92 0.57 0.71 21
l-cattle 0.00 0.00 0.00 2
lead 0.00 0.00 0.00 14
lei 0.00 0.00 0.00 3
linseed 0.00 0.00 0.00 0
livestock 0.63 0.79 0.70 24
lumber 0.00 0.00 0.00 6
meal-feed 0.00 0.00 0.00 17
money-fx 0.78 0.81 0.80 177
money-supply 0.80 0.71 0.75 34
naphtha 0.00 0.00 0.00 4
nat-gas 0.82 0.60 0.69 30
nickel 0.00 0.00 0.00 1
nzdlr 0.00 0.00 0.00 2
oat 0.00 0.00 0.00 4
oilseed 0.64 0.61 0.63 44
orange 1.00 0.36 0.53 11
palladium 0.00 0.00 0.00 1
palm-oil 1.00 0.56 0.71 9
palmkernel 0.00 0.00 0.00 1
pet-chem 0.00 0.00 0.00 12
platinum 0.00 0.00 0.00 7
plywood 0.00 0.00 0.00 0
pork-belly 0.00 0.00 0.00 0
potato 0.00 0.00 0.00 3
propane 0.00 0.00 0.00 3
rand 0.00 0.00 0.00 1
rape-oil 0.00 0.00 0.00 1
rapeseed 0.00 0.00 0.00 8
reserves 0.83 0.56 0.67 18
retail 0.00 0.00 0.00 2
rice 1.00 0.57 0.72 23
rubber 0.82 0.75 0.78 12
saudriyal 0.00 0.00 0.00 0
ship 0.95 0.81 0.87 89
silver 1.00 0.12 0.22 8
sorghum 1.00 0.12 0.22 8
soy-meal 0.00 0.00 0.00 12
soy-oil 0.00 0.00 0.00 8
soybean 0.72 0.56 0.63 32
stg 0.00 0.00 0.00 0
strategic-metal 0.00 0.00 0.00 11
sugar 1.00 0.80 0.89 35
sun-oil 0.00 0.00 0.00 0
sunseed 0.00 0.00 0.00 5
tapioca 0.00 0.00 0.00 0
tea 0.00 0.00 0.00 3
tin 1.00 0.42 0.59 12
trade 0.78 0.79 0.79 116
veg-oil 0.91 0.59 0.71 34
wheat 0.83 0.83 0.83 69
wool 0.00 0.00 0.00 0
wpi 0.00 0.00 0.00 10
yen 0.57 0.29 0.38 14
zinc 1.00 0.69 0.82 13
micro avg 0.92 0.81 0.86 3694
macro avg 0.41 0.30 0.33 3694
weighted avg 0.87 0.81 0.84 3694
samples avg 0.81 0.80 0.80 3694
</details>
<details>
<summary>Scikit-learn Baseline Model Evaluation Result</summary>
Classification Report:
precision recall f1-score support
acq 0.98 0.87 0.92 719
alum 1.00 0.00 0.00 23
austdlr 1.00 1.00 1.00 0
barley 1.00 0.00 0.00 12
bop 1.00 0.30 0.46 30
can 1.00 1.00 1.00 0
carcass 1.00 0.06 0.11 18
cocoa 1.00 0.61 0.76 18
coconut 1.00 0.00 0.00 2
coconut-oil 1.00 0.00 0.00 2
coffee 0.94 0.59 0.73 27
copper 1.00 0.22 0.36 18
copra-cake 1.00 0.00 0.00 1
corn 0.97 0.51 0.67 55
cornglutenfeed 1.00 1.00 1.00 0
cotton 1.00 0.06 0.11 18
cpi 1.00 0.14 0.25 28
cpu 1.00 0.00 0.00 1
crude 0.94 0.69 0.80 189
dfl 1.00 0.00 0.00 1
dlr 0.86 0.43 0.58 44
dmk 1.00 0.00 0.00 4
earn 0.99 0.97 0.98 1087
fishmeal 1.00 1.00 1.00 0
fuel 1.00 0.00 0.00 10
gas 1.00 0.00 0.00 17
gnp 1.00 0.31 0.48 35
gold 0.83 0.17 0.28 30
grain 1.00 0.65 0.79 146
groundnut 1.00 0.00 0.00 4
heat 1.00 0.00 0.00 5
hog 1.00 0.00 0.00 6
housing 1.00 0.00 0.00 4
income 1.00 0.00 0.00 7
instal-debt 1.00 0.00 0.00 1
interest 0.88 0.40 0.55 131
inventories 1.00 1.00 1.00 0
ipi 1.00 0.00 0.00 12
iron-steel 1.00 0.00 0.00 14
jet 1.00 0.00 0.00 1
jobs 1.00 0.14 0.25 21
l-cattle 1.00 0.00 0.00 2
lead 1.00 0.00 0.00 14
lei 1.00 0.00 0.00 3
linseed 1.00 1.00 1.00 0
livestock 0.67 0.08 0.15 24
lumber 1.00 0.00 0.00 6
meal-feed 1.00 0.00 0.00 17
money-fx 0.80 0.50 0.62 177
money-supply 0.88 0.41 0.56 34
naphtha 1.00 0.00 0.00 4
nat-gas 1.00 0.27 0.42 30
nickel 1.00 0.00 0.00 1
nzdlr 1.00 0.00 0.00 2
oat 1.00 0.00 0.00 4
oilseed 0.62 0.11 0.19 44
orange 1.00 0.00 0.00 11
palladium 1.00 0.00 0.00 1
palm-oil 1.00 0.22 0.36 9
palmkernel 1.00 0.00 0.00 1
pet-chem 1.00 0.00 0.00 12
platinum 1.00 0.00 0.00 7
plywood 1.00 1.00 1.00 0
pork-belly 1.00 1.00 1.00 0
potato 1.00 0.00 0.00 3
propane 1.00 0.00 0.00 3
rand 1.00 0.00 0.00 1
rape-oil 1.00 0.00 0.00 1
rapeseed 1.00 0.00 0.00 8
reserves 1.00 0.00 0.00 18
retail 1.00 0.00 0.00 2
rice 1.00 0.00 0.00 23
rubber 1.00 0.17 0.29 12
saudriyal 1.00 1.00 1.00 0
ship 0.92 0.26 0.40 89
silver 1.00 0.00 0.00 8
sorghum 1.00 0.00 0.00 8
soy-meal 1.00 0.00 0.00 12
soy-oil 1.00 0.00 0.00 8
soybean 1.00 0.16 0.27 32
stg 1.00 1.00 1.00 0
strategic-metal 1.00 0.00 0.00 11
sugar 1.00 0.60 0.75 35
sun-oil 1.00 1.00 1.00 0
sunseed 1.00 0.00 0.00 5
tapioca 1.00 1.00 1.00 0
tea 1.00 0.00 0.00 3
tin 1.00 0.00 0.00 12
trade 0.92 0.61 0.74 116
veg-oil 1.00 0.12 0.21 34
wheat 0.97 0.55 0.70 69
wool 1.00 1.00 1.00 0
wpi 1.00 0.00 0.00 10
yen 1.00 0.00 0.00 14
zinc 1.00 0.00 0.00 13
micro avg 0.97 0.64 0.77 3694
macro avg 0.98 0.25 0.29 3694
weighted avg 0.96 0.64 0.70 3694
samples avg 0.98 0.74 0.75 3694
</details>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
| :-----------: | :---: | :--: | :-------------: | :----: | :-----: | :------: |
| 0.1801 | 1.0 | 300 | 0.0439 | 0.3896 | 0.6210 | 0.3566 |
| 0.0345 | 2.0 | 600 | 0.0287 | 0.6289 | 0.7318 | 0.5954 |
| 0.0243 | 3.0 | 900 | 0.0219 | 0.6721 | 0.7579 | 0.6084 |
| 0.0178 | 4.0 | 1200 | 0.0177 | 0.7505 | 0.8128 | 0.6908 |
| 0.014 | 5.0 | 1500 | 0.0151 | 0.7905 | 0.8376 | 0.7278 |
| 0.0115 | 6.0 | 1800 | 0.0135 | 0.8132 | 0.8589 | 0.7555 |
| 0.0096 | 7.0 | 2100 | 0.0124 | 0.8291 | 0.8727 | 0.7725 |
| 0.0082 | 8.0 | 2400 | 0.0124 | 0.8335 | 0.8757 | 0.7822 |
| 0.0071 | 9.0 | 2700 | 0.0119 | 0.8392 | 0.8847 | 0.7883 |
| 0.0064 | 10.0 | 3000 | 0.0123 | 0.8339 | 0.8810 | 0.7828 |
| 0.0058 | 11.0 | 3300 | 0.0114 | 0.8538 | 0.8999 | 0.8047 |
| 0.0053 | 12.0 | 3600 | 0.0113 | 0.8525 | 0.8967 | 0.8044 |
| 0.0048 | 13.0 | 3900 | 0.0115 | 0.8520 | 0.8982 | 0.8029 |
| 0.0045 | 14.0 | 4200 | 0.0111 | 0.8566 | 0.8962 | 0.8104 |
| 0.0042 | 15.0 | 4500 | 0.0110 | 0.8610 | 0.9060 | 0.8165 |
| 0.0039 | 16.0 | 4800 | 0.0112 | 0.8583 | 0.9021 | 0.8138 |
| 0.0037 | 17.0 | 5100 | 0.0110 | 0.8620 | 0.9055 | 0.8196 |
| 0.0035 | 18.0 | 5400 | 0.0110 | 0.8629 | 0.9063 | 0.8196 |
| 0.0035 | 19.0 | 5700 | 0.0111 | 0.8624 | 0.9062 | 0.8180 |
| 0.0034 | 20.0 | 6000 | 0.0111 | 0.8626 | 0.9055 | 0.8177 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3
|
TheBloke/PlatYi-34B-Llama-Q-v3-AWQ
|
TheBloke
| 2023-12-17T18:31:01Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:garage-bAInd/Open-Platypus",
"base_model:kyujinpy/PlatYi-34B-Llama-Q-v3",
"base_model:quantized:kyujinpy/PlatYi-34B-Llama-Q-v3",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2023-12-17T17:21:16Z |
---
base_model: kyujinpy/PlatYi-34B-Llama-Q-v3
datasets:
- garage-bAInd/Open-Platypus
inference: false
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
model_creator: KyujinHan
model_name: PlatYi 34B Llama Q V3
model_type: yi
pipeline_tag: text-generation
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# PlatYi 34B Llama Q V3 - AWQ
- Model creator: [KyujinHan](https://huggingface.co/kyujinpy)
- Original model: [PlatYi 34B Llama Q V3](https://huggingface.co/kyujinpy/PlatYi-34B-Llama-Q-v3)
<!-- description start -->
## Description
This repo contains AWQ model files for [KyujinHan's PlatYi 34B Llama Q V3](https://huggingface.co/kyujinpy/PlatYi-34B-Llama-Q-v3).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/PlatYi-34B-Llama-Q-v3-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/PlatYi-34B-Llama-Q-v3-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/PlatYi-34B-Llama-Q-v3-GGUF)
* [KyujinHan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/kyujinpy/PlatYi-34B-Llama-Q-v3)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/PlatYi-34B-Llama-Q-v3-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 19.23 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/PlatYi-34B-Llama-Q-v3-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `PlatYi-34B-Llama-Q-v3-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/PlatYi-34B-Llama-Q-v3-AWQ --quantization awq --dtype auto
```
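Once the server is running, it can be queried over HTTP. A minimal sketch, assuming the default `/generate` endpoint and port of `vllm.entrypoints.api_server` (adjust host/port and sampling parameters to your deployment):

```python
import requests

# Assumed default host/port of the vLLM API server started above.
response = requests.post(
    "http://localhost:8000/generate",
    json={
        "prompt": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nTell me about AI\n\n### Response:\n",
        "max_tokens": 256,
        "temperature": 0.7,
    },
)
print(response.json())
```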
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/PlatYi-34B-Llama-Q-v3-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/PlatYi-34B-Llama-Q-v3-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/PlatYi-34B-Llama-Q-v3-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: KyujinHan's PlatYi 34B Llama Q V3
# **PlatYi-34B-Llama-Q-v3**
<img src='./PlatYi.png' width=256>
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
PlatYi-34B-Llama-Q-v3 is an auto-regressive language model based on the Yi-34B transformer architecture.
**Blog Link**
Blog: [Coming soon...]
Github: [Coming soon...]
**Base Model**
[chargoddard/Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama)
**Training Dataset**
[garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
## Bug fixes
- The previous version of this model contained some mistakes.
- I modified the templates and warmup_steps.
## Notice
While training, I used Q-LoRA.
The lora_r value is 64.
# **Model Benchmark**
## Open leaderboard
- Follow up as [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PlatYi-34B-Llama-Q-v3 | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| PlatYi-34B-Llama-Q-v2 | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| PlatYi-34B-Llama-Q | 71.13 | 65.70 | 85.22 | 78.78 | 53.64 | 83.03 | 60.42 |
| PlatYi-34B-Llama | 68.37 | 67.83 | 85.35 | 78.26 | 53.46 | 82.87 | 42.46 |
| [Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) | 70.95 | 64.59 | 85.63 | 76.31 | 55.60 | 82.79 | 60.80 |
| [Yi-34B](https://huggingface.co/01-ai/Yi-34B) | 69.42 | 64.59 | 85.69 | 76.35 | 56.23 | 83.03 | 50.64 |
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/PlatYi-34B-Llama-Q-v3"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
---
|
TheBloke/PiVoT-MoE-GPTQ
|
TheBloke
| 2023-12-17T18:30:04Z | 27 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"base_model:maywell/PiVoT-MoE",
"base_model:quantized:maywell/PiVoT-MoE",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-12-17T16:20:29Z |
---
base_model: maywell/PiVoT-MoE
inference: false
license: cc-by-nc-4.0
model_creator: Jeonghwan Park
model_name: Pivot MoE
model_type: mixtral
prompt_template: '{system_message}
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Pivot MoE - GPTQ
- Model creator: [Jeonghwan Park](https://huggingface.co/maywell)
- Original model: [Pivot MoE](https://huggingface.co/maywell/PiVoT-MoE)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Jeonghwan Park's Pivot MoE](https://huggingface.co/maywell/PiVoT-MoE).
Mixtral GPTQs currently require:
* Transformers 4.36.0 or later
* either, AutoGPTQ 0.6 compiled from source, or
* Transformers 4.37.0.dev0 compiled from Github with: `pip3 install git+https://github.com/huggingface/transformers`
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/PiVoT-MoE-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/PiVoT-MoE-GGUF)
* [Jeonghwan Park's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/maywell/PiVoT-MoE)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca-System
```
{system_message}
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
Mixtral GPTQs currently have special requirements - see Description above.
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/PiVoT-MoE-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.50 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/PiVoT-MoE-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 19.18 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/PiVoT-MoE-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 21.28 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/PiVoT-MoE-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 14.02 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/PiVoT-MoE-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 14.66 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/PiVoT-MoE-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 16.66 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/PiVoT-MoE-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 36.42 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/PiVoT-MoE-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 37.24 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/PiVoT-MoE-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/PiVoT-MoE-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `PiVoT-MoE-GPTQ`:
```shell
mkdir PiVoT-MoE-GPTQ
huggingface-cli download TheBloke/PiVoT-MoE-GPTQ --local-dir PiVoT-MoE-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir PiVoT-MoE-GPTQ
huggingface-cli download TheBloke/PiVoT-MoE-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir PiVoT-MoE-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir PiVoT-MoE-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/PiVoT-MoE-GPTQ --local-dir PiVoT-MoE-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/PiVoT-MoE-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
**NOTE**: Requires:
* Transformers 4.36.0, or Transformers 4.37.0.dev0 from Github
* Either AutoGPTQ 0.6 compiled from source and `Loader: AutoGPTQ`,
* or, `Loader: Transformers`, if you installed Transformers from Github: `pip3 install git+https://github.com/huggingface/transformers`
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/PiVoT-MoE-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/PiVoT-MoE-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `PiVoT-MoE-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
Not currently supported for Mixtral models.
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.37.0.dev0 from Github, Optimum 1.16.0 or later, and AutoGPTQ 0.5.1 or later.
```shell
pip3 install --upgrade "git+https://github.com/huggingface/transformers" optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
DISABLE_QIGEN=1 pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/PiVoT-MoE-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''{system_message}
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ 0.6 (compiled from source) and Transformers 4.37.0 (installed from Github).
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Jeonghwan Park's Pivot MoE
# PiVot-MoE

## Model Description
PiVoT-MoE is an advanced AI model specifically designed for roleplaying purposes. It has been trained using a combination of four 10.7B-sized experts, each with its own specialized characteristics, all fine-tuned to bring a unique and diverse roleplaying experience.
The Mixture of Experts (MoE) technique is utilized in this model, allowing the experts to work together synergistically, resulting in a more cohesive and natural conversation flow. The MoE architecture allows for a higher level of flexibility and adaptability, enabling PiVoT-MoE to handle a wide variety of roleplaying scenarios and characters.
Based on the PiVoT-10.7B-Mistral-v0.2-RP model, PiVoT-MoE takes it a step further with the incorporation of the MoE technique. This means that not only does the model have an expansive knowledge base, but it also has the ability to mix and match its expertise to better suit the specific roleplaying scenario.
## Prompt Template - Alpaca (ChatML works)
```
{system}
### Instruction:
{instruction}
### Response:
{response}
```
|
owanr/SChem5Labels-roberta-base-inter-frequency-human_annots_alpha0.0_whole_1e-05
|
owanr
| 2023-12-17T18:26:06Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-12-17T18:25:48Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: SChem5Labels-roberta-base-inter-frequency-human_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SChem5Labels-roberta-base-inter-frequency-human_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.4255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
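For reference, these settings map roughly onto Hugging Face `TrainingArguments` as in the sketch below (the output directory is an assumed name; data loading and the model head are not shown):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="SChem5Labels-roberta-base",
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```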
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.535 | 1.0 | 3164 | 7.4255 |
| 7.625 | 2.0 | 6328 | 7.4255 |
| 7.694 | 3.0 | 9492 | 7.4255 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
oSabre/opus_books_es_pt
|
oSabre
| 2023-12-17T18:25:17Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-17T11:25:33Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: opus_books_es_pt
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
config: es-pt
split: train
args: es-pt
metrics:
- name: Bleu
type: bleu
value: 1.2169
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus_books_es_pt
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0763
- Bleu: 1.2169
- Gen Len: 18.5038
## Model description
More information needed
## Intended uses & limitations
More information needed
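A minimal inference sketch is shown below (the task prefix is an assumption based on common T5 translation fine-tunes and may need adjusting):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "oSabre/opus_books_es_pt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The "translate Spanish to Portuguese: " prefix is assumed, not documented in this card.
text = "translate Spanish to Portuguese: La vida es un viaje, no un destino."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```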
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 133 | 2.5227 | 0.5795 | 18.5789 |
| No log | 2.0 | 266 | 2.3918 | 0.6703 | 18.5451 |
| No log | 3.0 | 399 | 2.3166 | 0.8471 | 18.5301 |
| 2.6664 | 4.0 | 532 | 2.2665 | 0.8914 | 18.4737 |
| 2.6664 | 5.0 | 665 | 2.2319 | 0.928 | 18.4549 |
| 2.6664 | 6.0 | 798 | 2.2025 | 1.0067 | 18.5113 |
| 2.6664 | 7.0 | 931 | 2.1784 | 1.0162 | 18.515 |
| 2.2503 | 8.0 | 1064 | 2.1580 | 1.1102 | 18.5113 |
| 2.2503 | 9.0 | 1197 | 2.1420 | 1.0638 | 18.515 |
| 2.2503 | 10.0 | 1330 | 2.1257 | 1.1149 | 18.5113 |
| 2.2503 | 11.0 | 1463 | 2.1142 | 1.1334 | 18.4474 |
| 2.1172 | 12.0 | 1596 | 2.1091 | 1.1308 | 18.4925 |
| 2.1172 | 13.0 | 1729 | 2.0980 | 1.1655 | 18.5075 |
| 2.1172 | 14.0 | 1862 | 2.0950 | 1.1464 | 18.4925 |
| 2.1172 | 15.0 | 1995 | 2.0890 | 1.1383 | 18.5038 |
| 2.0185 | 16.0 | 2128 | 2.0833 | 1.1671 | 18.5 |
| 2.0185 | 17.0 | 2261 | 2.0806 | 1.1555 | 18.5038 |
| 2.0185 | 18.0 | 2394 | 2.0777 | 1.15 | 18.5113 |
| 1.9882 | 19.0 | 2527 | 2.0770 | 1.2252 | 18.5113 |
| 1.9882 | 20.0 | 2660 | 2.0763 | 1.2169 | 18.5038 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
adityamavle/ppo-LunarLander-v3
|
adityamavle
| 2023-12-17T18:22:46Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-17T18:22:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -507.76 +/- 138.13
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it; the filename is assumed.
checkpoint = load_from_hub("adityamavle/ppo-LunarLander-v3", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
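To sanity-check the reported mean reward, the loaded agent could be evaluated for a few episodes (a sketch; assumes the Gymnasium `LunarLander-v2` environment and the `model` object loaded above):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Evaluate the policy over 10 episodes and report mean +/- std reward.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```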
|
owanr/ghc-roberta-base-inter-sorted-model_annots_alpha0.0_whole_1e-05
|
owanr
| 2023-12-17T18:19:36Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-12-17T18:19:18Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: ghc-roberta-base-inter-sorted-model_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ghc-roberta-base-inter-sorted-model_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9064
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.904 | 1.0 | 11020 | 0.9064 |
| 0.859 | 2.0 | 22040 | 0.9064 |
| 0.901 | 3.0 | 33060 | 0.9064 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
karawalla/mistral_b_karawalla_aqclv1002
|
karawalla
| 2023-12-17T18:19:29Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-v0.1",
"region:us"
] | null | 2023-12-17T18:19:12Z |
---
library_name: peft
base_model: mistralai/Mixtral-8x7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
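Based on the card metadata (a PEFT adapter on top of `mistralai/Mixtral-8x7B-v0.1`), loading would presumably follow the usual PEFT pattern sketched below; this is an assumption from the metadata, not code provided by the author, and the base model requires substantial GPU memory:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mixtral-8x7B-v0.1"
adapter_id = "karawalla/mistral_b_karawalla_aqclv1002"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# Load the base model, then attach the PEFT adapter weights from this repo.
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
```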
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
owanr/SChem5Labels-roberta-base-intra-shuffle-model_annots_alpha0.0_whole_1e-05
|
owanr
| 2023-12-17T18:14:49Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-12-17T18:14:29Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: SChem5Labels-roberta-base-intra-shuffle-model_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SChem5Labels-roberta-base-intra-shuffle-model_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.6970
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.981 | 1.0 | 3164 | 6.6970 |
| 6.834 | 2.0 | 6328 | 6.6970 |
| 7.035 | 3.0 | 9492 | 6.6970 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
A2H0H0R1/Llama-2-7b-chat-hf-biology-2
|
A2H0H0R1
| 2023-12-17T18:07:13Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"llama",
"llama-factory",
"lora",
"generated_from_trainer",
"biology",
"dataset:A2H0H0R1/Animal-nutrition-pair",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:other",
"region:us"
] | null | 2023-12-17T17:19:17Z |
---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
- biology
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: dpo_model
results: []
datasets:
- A2H0H0R1/Animal-nutrition-pair
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_model
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the Animal-nutrition-pair dataset, using DPO fine-tuning.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
- mixed_precision_training: Native AMP
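As a rough illustration, the hyperparameters above could be expressed with TRL's `DPOTrainer` as sketched below; the dataset column names, LoRA settings, and output directory are assumptions not taken from this card:

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumes the dataset exposes "prompt", "chosen" and "rejected" columns.
dataset = load_dataset("A2H0H0R1/Animal-nutrition-pair", split="train")

training_args = TrainingArguments(
    output_dir="dpo_model",               # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    fp16=True,                            # "Native AMP"
    seed=42,
)

trainer = DPOTrainer(
    model,
    ref_model=None,                       # with a peft_config, TRL derives the reference model
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # assumed LoRA settings
)
trainer.train()
```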
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Bilal326/SD_2.0_DreamBooth_DragonWarrior
|
Bilal326
| 2023-12-17T18:04:14Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"StableDiffusion",
"KungfuPanda",
"DreamBooth",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-17T16:22:36Z |
---
license: apache-2.0
tags:
- StableDiffusion
- KungfuPanda
- DreamBooth
---
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_SystemError0.2_Seed103
|
behzadnet
| 2023-12-17T18:02:53Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-17T18:02:47Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
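These flags correspond to a `BitsAndBytesConfig` roughly like the sketch below (illustrative only, not code from this repo):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute, as listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```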
### Framework versions
- PEFT 0.7.0.dev0
|
owanr/SChem5Labels-roberta-base-inter-shuffle-model_annots_alpha0.0_whole_1e-05
|
owanr
| 2023-12-17T18:02:50Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-12-17T18:02:32Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: SChem5Labels-roberta-base-inter-shuffle-model_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SChem5Labels-roberta-base-inter-shuffle-model_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.958 | 1.0 | 3164 | 6.9268 |
| 7.27 | 2.0 | 6328 | 6.9268 |
| 7.077 | 3.0 | 9492 | 6.9268 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LoneStriker/Mixtral-8x7B-v0.1-6.0bpw-h6-exl2-2
|
LoneStriker
| 2023-12-17T17:51:15Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"fr",
"it",
"de",
"es",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-17T17:28:08Z |
---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
---
# Model Card for Mixtral-8x7B
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
## Warning
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model from the torrent release cannot (yet) be instantiated with HF.
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, transformers will load the model in full precision. You can reduce the memory required to run the model by using the optimizations offered in the HF ecosystem:
### In half-precision
Note that `float16` precision only works on GPU devices.
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Lower precision (8-bit & 4-bit) using `bitsandbytes`
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Load the model with Flash Attention 2
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
## Notice
Mixtral-8x7B is a pretrained base model and therefore does not have any moderation mechanisms.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
owanr/ghc-roberta-base-inter-sorted-human_annots_alpha0.0_whole_1e-05
|
owanr
| 2023-12-17T17:42:14Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-12-17T17:41:56Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: ghc-roberta-base-inter-sorted-human_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ghc-roberta-base-inter-sorted-human_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1930
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.194 | 1.0 | 11020 | 0.1930 |
| 0.174 | 2.0 | 22040 | 0.1930 |
| 0.211 | 3.0 | 33060 | 0.1930 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
owanr/SChem5Labels-roberta-base-intra-sorted-model_annots_alpha0.0_whole_1e-05
|
owanr
| 2023-12-17T17:39:39Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-12-17T17:39:21Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: SChem5Labels-roberta-base-intra-sorted-model_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SChem5Labels-roberta-base-intra-sorted-model_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.4949
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.963 | 1.0 | 3164 | 7.4949 |
| 7.634 | 2.0 | 6328 | 7.4949 |
| 7.963 | 3.0 | 9492 | 7.4949 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
prashantyai/my_awesome_eli5_mlm_model
|
prashantyai
| 2023-12-17T17:39:32Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-12-17T17:08:06Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_keras_callback
model-index:
- name: prashantyai/my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# prashantyai/my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.8890
- Validation Loss: 1.7635
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.0236 | 1.8024 | 0 |
| 1.9394 | 1.8156 | 1 |
| 1.8890 | 1.7635 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
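### Example usage (sketch)
A hedged usage sketch, not part of the generated card: querying the model with the fill-mask pipeline, assuming the TensorFlow weights on the Hub can be loaded by `pipeline`. The input sentence is a placeholder.

```python
# Hedged usage sketch: fill in a masked token with this checkpoint.
# Assumes the TensorFlow weights on the Hub can be loaded by the pipeline.
from transformers import pipeline

mask_filler = pipeline("fill-mask", model="prashantyai/my_awesome_eli5_mlm_model")

# RoBERTa-style models use the <mask> token.
for pred in mask_filler("The Milky Way is a <mask> galaxy."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```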
|
BrianHsu/BERT_test_graident_accumulation_test3
|
BrianHsu
| 2023-12-17T17:37:48Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:google-bert/bert-base-chinese",
"base_model:finetune:google-bert/bert-base-chinese",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2023-12-17T15:57:28Z |
---
base_model: bert-base-chinese
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BERT_test_graident_accumulation_test3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_test_graident_accumulation_test3
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0101
- Accuracy: 0.6102
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 94 | 0.9398 | 0.6007 |
| No log | 2.0 | 188 | 0.9191 | 0.6183 |
| No log | 3.0 | 282 | 1.0101 | 0.6102 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
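### Example usage (sketch)
A hedged usage sketch, not part of the generated card: scoring candidate answers with the multiple-choice head. The question and choices are made-up placeholders.

```python
# Hedged usage sketch: score candidate answers with the multiple-choice head.
# The question and choices below are made-up placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

repo = "BrianHsu/BERT_test_graident_accumulation_test3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

question = "今天天氣如何?"
choices = ["晴天", "下雨", "下雪"]

# Pair the question with every choice; the model expects (batch, num_choices, seq_len).
encoding = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted choice:", choices[int(logits.argmax(dim=-1))])
```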
|
hkivancoral/smids_5x_deit_tiny_adamax_0001_fold4
|
hkivancoral
| 2023-12-17T17:30:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-14T10:32:36Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_5x_deit_tiny_adamax_0001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.88
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_tiny_adamax_0001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2292
- Accuracy: 0.88
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2019 | 1.0 | 375 | 0.3616 | 0.8683 |
| 0.2348 | 2.0 | 750 | 0.5390 | 0.7983 |
| 0.0464 | 3.0 | 1125 | 0.5043 | 0.88 |
| 0.0924 | 4.0 | 1500 | 0.5883 | 0.8833 |
| 0.0137 | 5.0 | 1875 | 0.7305 | 0.8783 |
| 0.0256 | 6.0 | 2250 | 0.8161 | 0.8783 |
| 0.0006 | 7.0 | 2625 | 0.7997 | 0.8833 |
| 0.0263 | 8.0 | 3000 | 0.8542 | 0.885 |
| 0.0002 | 9.0 | 3375 | 0.9159 | 0.87 |
| 0.0 | 10.0 | 3750 | 0.9248 | 0.8833 |
| 0.0181 | 11.0 | 4125 | 1.0824 | 0.8633 |
| 0.0031 | 12.0 | 4500 | 0.9537 | 0.89 |
| 0.0115 | 13.0 | 4875 | 1.0751 | 0.8667 |
| 0.0169 | 14.0 | 5250 | 0.8764 | 0.8867 |
| 0.0 | 15.0 | 5625 | 0.9541 | 0.8817 |
| 0.0 | 16.0 | 6000 | 1.0324 | 0.87 |
| 0.0003 | 17.0 | 6375 | 1.0424 | 0.8733 |
| 0.0131 | 18.0 | 6750 | 1.0393 | 0.8767 |
| 0.0 | 19.0 | 7125 | 1.0119 | 0.8867 |
| 0.0 | 20.0 | 7500 | 0.9792 | 0.8833 |
| 0.0 | 21.0 | 7875 | 1.0247 | 0.88 |
| 0.0 | 22.0 | 8250 | 1.0061 | 0.885 |
| 0.0 | 23.0 | 8625 | 1.0234 | 0.8867 |
| 0.0 | 24.0 | 9000 | 1.0734 | 0.8733 |
| 0.0 | 25.0 | 9375 | 1.0638 | 0.8867 |
| 0.0 | 26.0 | 9750 | 1.0711 | 0.88 |
| 0.0 | 27.0 | 10125 | 1.1175 | 0.88 |
| 0.0 | 28.0 | 10500 | 1.0879 | 0.8867 |
| 0.0 | 29.0 | 10875 | 1.1361 | 0.8817 |
| 0.0 | 30.0 | 11250 | 1.1028 | 0.89 |
| 0.0 | 31.0 | 11625 | 1.1478 | 0.8817 |
| 0.0 | 32.0 | 12000 | 1.1406 | 0.8833 |
| 0.0 | 33.0 | 12375 | 1.1490 | 0.8833 |
| 0.0 | 34.0 | 12750 | 1.1669 | 0.8817 |
| 0.0 | 35.0 | 13125 | 1.1635 | 0.8833 |
| 0.0 | 36.0 | 13500 | 1.1789 | 0.8817 |
| 0.0 | 37.0 | 13875 | 1.1756 | 0.8833 |
| 0.0029 | 38.0 | 14250 | 1.1808 | 0.8833 |
| 0.0 | 39.0 | 14625 | 1.1891 | 0.8833 |
| 0.0 | 40.0 | 15000 | 1.1976 | 0.8833 |
| 0.0 | 41.0 | 15375 | 1.2036 | 0.8817 |
| 0.0 | 42.0 | 15750 | 1.2058 | 0.88 |
| 0.0 | 43.0 | 16125 | 1.2107 | 0.8817 |
| 0.0 | 44.0 | 16500 | 1.2163 | 0.88 |
| 0.0 | 45.0 | 16875 | 1.2201 | 0.8783 |
| 0.0 | 46.0 | 17250 | 1.2238 | 0.8783 |
| 0.0 | 47.0 | 17625 | 1.2266 | 0.88 |
| 0.0 | 48.0 | 18000 | 1.2286 | 0.88 |
| 0.0 | 49.0 | 18375 | 1.2293 | 0.88 |
| 0.0 | 50.0 | 18750 | 1.2292 | 0.88 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
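### Example usage (sketch)
A hedged usage sketch, not part of the generated card: classifying a single image with the image-classification pipeline. The image path is a placeholder.

```python
# Hedged usage sketch: classify a single image with the fine-tuned DeiT checkpoint.
# "example.jpg" is a placeholder path.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_5x_deit_tiny_adamax_0001_fold4",
)
print(classifier("example.jpg", top_k=3))
```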
|
owanr/SChem5Labels-roberta-base-inter-sorted-human_annots_alpha0.0_whole_1e-05
|
owanr
| 2023-12-17T17:15:56Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-12-17T17:15:39Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: SChem5Labels-roberta-base-inter-sorted-human_annots_alpha0.0_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SChem5Labels-roberta-base-inter-sorted-human_annots_alpha0.0_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.2285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.419 | 1.0 | 3164 | 8.2285 |
| 8.423 | 2.0 | 6328 | 8.2285 |
| 8.528 | 3.0 | 9492 | 8.2285 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LoneStriker/Mixtral-8x7B-v0.1-4.0bpw-h6-exl2-2
|
LoneStriker
| 2023-12-17T17:08:24Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"fr",
"it",
"de",
"es",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-17T16:26:34Z |
---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
---
# Model Card for Mixtral-8x7B
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
## Warning
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model from the torrent release cannot (yet) be instantiated with HF.
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, transformers will load the model in full precision. You can reduce the memory required to run the model by using the optimizations offered in the HF ecosystem:
### In half-precision
Note that `float16` precision only works on GPU devices.
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Lower precision (8-bit & 4-bit) using `bitsandbytes`
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Load the model with Flash Attention 2
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
## Notice
Mixtral-8x7B is a pretrained base model and therefore does not have any moderation mechanisms.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
MattGarber/output
|
MattGarber
| 2023-12-17T16:56:26Z | 5 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-17T15:48:10Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - MattGarber/output
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
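A hedged usage sketch, not part of the original card: loading the DreamBooth weights with diffusers and sampling with the instance prompt used during training. The output filename is a placeholder.

```python
# Hedged usage sketch: load the DreamBooth weights with diffusers and sample
# with the instance prompt used during training.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("MattGarber/output", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of sks dog", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```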
|
ShynBui/s25
|
ShynBui
| 2023-12-17T16:52:50Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-04T16:15:52Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: s25
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# s25
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
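### Example usage (sketch)
A hedged usage sketch, not part of the generated card: extractive question answering with the fine-tuned checkpoint. The question and context are made-up placeholders.

```python
# Hedged usage sketch: extractive question answering with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="ShynBui/s25")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="The s25 model is a bert-base-cased checkpoint fine-tuned on the squad_v2 dataset.",
)
print(result["answer"], result["score"])
```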
|
neopolita/LunarLander-v2
|
neopolita
| 2023-12-17T16:48:00Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-17T16:47:55Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -186.54 +/- 54.20
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'neopolita/LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
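A heavily hedged evaluation sketch, not from the original card: the card does not document how the checkpoint is stored, so the policy below is a random-action placeholder that would need to be replaced with the actual loaded PPO agent.

```python
# Heavily hedged evaluation sketch: how the checkpoint in this repo is stored
# is not documented here, so `policy` below falls back to random actions and
# should be replaced with the actual trained PPO policy.
import gymnasium as gym

env = gym.make("LunarLander-v2")

def policy(obs):
    return env.action_space.sample()  # placeholder for the trained agent

obs, _ = env.reset(seed=1)
total_reward, done = 0.0, False
while not done:
    obs, reward, terminated, truncated, _ = env.step(policy(obs))
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward:.1f}")
```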
|
TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ
|
TheBloke
| 2023-12-17T16:46:55Z | 24 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"ko",
"base_model:maywell/PiVoT-10.7B-Mistral-v0.2",
"base_model:quantized:maywell/PiVoT-10.7B-Mistral-v0.2",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-12-16T10:06:57Z |
---
base_model: maywell/PiVoT-10.7B-Mistral-v0.2
inference: false
language:
- en
- ko
license: cc-by-sa-4.0
model_creator: Jeonghwan Park
model_name: Pivot 10.7B Mistral V0.2
model_type: mistral
pipeline_tag: text-generation
prompt_template: '[INST] {prompt} [/INST]
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Pivot 10.7B Mistral V0.2 - GPTQ
- Model creator: [Jeonghwan Park](https://huggingface.co/maywell)
- Original model: [Pivot 10.7B Mistral V0.2](https://huggingface.co/maywell/PiVoT-10.7B-Mistral-v0.2)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Jeonghwan Park's Pivot 10.7B Mistral V0.2](https://huggingface.co/maywell/PiVoT-10.7B-Mistral-v0.2).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/PiVoT-10.7B-Mistral-v0.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/PiVoT-10.7B-Mistral-v0.2-GGUF)
* [Jeonghwan Park's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/maywell/PiVoT-10.7B-Mistral-v0.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Mistral
```
[INST] {prompt} [/INST]
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 5.98 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 6.59 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 11.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 11.25 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 11.99 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 6.18 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `PiVoT-10.7B-Mistral-v0.2-GPTQ`:
```shell
mkdir PiVoT-10.7B-Mistral-v0.2-GPTQ
huggingface-cli download TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ --local-dir PiVoT-10.7B-Mistral-v0.2-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir PiVoT-10.7B-Mistral-v0.2-GPTQ
huggingface-cli download TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir PiVoT-10.7B-Mistral-v0.2-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir PiVoT-10.7B-Mistral-v0.2-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ --local-dir PiVoT-10.7B-Mistral-v0.2-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `PiVoT-10.7B-Mistral-v0.2-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''[INST] {prompt} [/INST]
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/PiVoT-10.7B-Mistral-v0.2-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''[INST] {prompt} [/INST]
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Jeonghwan Park's Pivot 10.7B Mistral V0.2
# PiVoT-10.7B-Mistral-v0.2

# **Model Details**
### Description
PiVoT is a fine-tuned model based on a merge of Mistral 0.2.
SFT + DPO were used during training.
Follow me on Twitter: https://twitter.com/stablefluffy
Consider supporting me in making these models: https://www.buymeacoffee.com/mwell, or with a RunPod credit gift 💕
Contact me on Telegram: https://t.me/AlzarTakkarsen
|
NExtNewChattingAI/shark_tank_ai_7_b
|
NExtNewChattingAI
| 2023-12-17T16:43:55Z | 1,605 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-17T16:23:32Z |
---
license: apache-2.0
language:
- en
---
This model is based on <a href="https://huggingface.co/viethq188/LeoScorpius-7B-Chat-DPO">LeoScorpius</a>, trained on internal data.
This chatbot is a highly advanced artificial intelligence designed to provide you with personalized assistance and support. With its natural language processing capabilities, it can understand and respond to a wide range of queries and requests, making it a valuable tool for both personal and professional use.
The chatbot is equipped with a vast knowledge base, allowing it to provide accurate and reliable information on a wide range of topics, from general knowledge to specific industry-related information. It can also perform tasks such as scheduling appointments, sending emails, and even ordering products online.
One of the standout features of this assistant chatbot is its ability to learn and adapt to your individual preferences and needs. Over time, it can become more personalized to your specific requirements, making it an even more valuable asset to your daily life.
The chatbot is also designed to be user-friendly and intuitive, with a simple and easy-to-use interface that allows you to interact with it in a natural and conversational way. Whether you're looking for information, need help with a task, or just want to chat, your assistant chatbot is always ready and available to assist you.
|
Kooten/Noromaid-13b-v0.2-6bpw-exl2
|
Kooten
| 2023-12-17T16:40:47Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-17T15:45:40Z |
---
license: cc-by-nc-4.0
---
# This is a 6BPW EXL2 quant of Noromaid-13b-v0.2
Exllama 2 quant of [NeverSleep/Noromaid-13b-v0.2](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2)
## Prompt template: Custom format, or Alpaca
### Custom format:
SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
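A minimal sketch, not from the original card, of filling the Alpaca template above before sending it to whichever ExLlama 2 backend you use; the instruction text is a placeholder.

```python
# Minimal sketch of filling the Alpaca template above; the instruction is a placeholder.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(prompt="Write a short greeting.")
print(prompt)
```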
|
kanishka/smolm-autoreg-bpe-counterfactual-babylm-aann-indef-non_num_removal-1e-4
|
kanishka
| 2023-12-17T16:33:08Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:kanishka/counterfactual-babylm-aanns_indef_non_num_removal",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-17T03:21:09Z |
---
tags:
- generated_from_trainer
datasets:
- kanishka/counterfactual-babylm-aanns_indef_non_num_removal
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual-babylm-aann-indef-non_num_removal-1e-4
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: kanishka/counterfactual-babylm-aanns_indef_non_num_removal
type: kanishka/counterfactual-babylm-aanns_indef_non_num_removal
metrics:
- name: Accuracy
type: accuracy
value: 0.4052309408152
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-counterfactual-babylm-aann-indef-non_num_removal-1e-4
This model was trained from scratch on the kanishka/counterfactual-babylm-aanns_indef_non_num_removal dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4253
- Accuracy: 0.4052
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 4.0479 | 1.0 | 18592 | 4.2707 | 0.3092 |
| 3.5639 | 2.0 | 37184 | 3.7423 | 0.3625 |
| 3.3891 | 3.0 | 55776 | 3.5886 | 0.3789 |
| 3.2863 | 4.0 | 74368 | 3.4958 | 0.3879 |
| 3.2196 | 5.0 | 92960 | 3.4607 | 0.3931 |
| 3.1627 | 6.0 | 111552 | 3.4520 | 0.3956 |
| 3.1282 | 7.0 | 130144 | 3.4094 | 0.3982 |
| 3.0897 | 8.0 | 148736 | 3.4137 | 0.3995 |
| 3.0631 | 9.0 | 167328 | 3.4069 | 0.4010 |
| 3.0316 | 10.0 | 185920 | 3.4121 | 0.4018 |
| 3.0154 | 11.0 | 204512 | 3.4134 | 0.4020 |
| 2.9887 | 12.0 | 223104 | 3.4061 | 0.4032 |
| 2.9637 | 13.0 | 241696 | 3.4075 | 0.4038 |
| 2.9493 | 14.0 | 260288 | 3.4058 | 0.4045 |
| 2.9268 | 15.0 | 278880 | 3.4043 | 0.4047 |
| 2.9095 | 16.0 | 297472 | 3.4192 | 0.4048 |
| 2.8912 | 17.0 | 316064 | 3.4116 | 0.4050 |
| 2.875 | 18.0 | 334656 | 3.4216 | 0.4049 |
| 2.8542 | 19.0 | 353248 | 3.4266 | 0.4052 |
| 2.8429 | 20.0 | 371840 | 3.4253 | 0.4052 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
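### Example usage (sketch)
A hedged usage sketch, not part of the generated card: sampling a continuation from the causal LM with the text-generation pipeline. The prompt is a placeholder.

```python
# Hedged usage sketch: sample a continuation from the causal LM.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="kanishka/smolm-autoreg-bpe-counterfactual-babylm-aann-indef-non_num_removal-1e-4",
)
print(generator("The children went to the", max_new_tokens=20, do_sample=True)[0]["generated_text"])
```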
|