| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-26 12:28:17) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 533 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-26 12:22:02) | card (string, 11–1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
mrupar/flan-t5-small-samsum | mrupar | 2023-12-19T18:44:17Z | 5 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:samsum", "base_model:google/flan-t5-small", "base_model:finetune:google/flan-t5-small", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2023-12-19T18:25:59Z |
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan-t5-small-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 42.6693
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-samsum
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6754
- Rouge1: 42.6693
- Rouge2: 18.3378
- Rougel: 35.2729
- Rougelsum: 38.9033
- Gen Len: 16.8474
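A quick usage sketch (untested; the repo id comes from this card and the task is inferred from its text2text-generation tag — the dialogue is just an illustrative SAMSum-style input):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mrupar/flan-t5-small-samsum")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
# Generation kwargs such as max_new_tokens are forwarded to generate().
print(summarizer(dialogue, max_new_tokens=60)[0]["summary_text"])
```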
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 52
- eval_batch_size: 52
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8824 | 0.35 | 100 | 1.7015 | 42.4703 | 18.3068 | 35.1199 | 38.8083 | 16.6532 |
| 1.8578 | 0.7 | 200 | 1.6878 | 42.0064 | 18.2236 | 34.9497 | 38.4611 | 16.7216 |
| 1.835 | 1.06 | 300 | 1.6823 | 42.7407 | 18.5955 | 35.4344 | 38.9663 | 16.9048 |
| 1.8144 | 1.41 | 400 | 1.6786 | 42.6272 | 18.3894 | 35.34 | 38.8868 | 16.6618 |
| 1.8094 | 1.76 | 500 | 1.6754 | 42.6693 | 18.3378 | 35.2729 | 38.9033 | 16.8474 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
temova/flan-t5-small-samsum | temova | 2023-12-19T18:44:11Z | 5 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:samsum", "base_model:google/flan-t5-small", "base_model:finetune:google/flan-t5-small", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2023-12-19T18:24:33Z |
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan-t5-small-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 42.6907
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-samsum
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6754
- Rouge1: 42.6907
- Rouge2: 18.3626
- Rougel: 35.2723
- Rougelsum: 38.9062
- Gen Len: 16.8474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 52
- eval_batch_size: 52
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8824 | 0.35 | 100 | 1.7015 | 42.4935 | 18.3634 | 35.0823 | 38.8358 | 16.6532 |
| 1.8578 | 0.7 | 200 | 1.6878 | 42.0329 | 18.2685 | 34.9421 | 38.4636 | 16.7216 |
| 1.835 | 1.06 | 300 | 1.6823 | 42.7493 | 18.6379 | 35.4001 | 38.9845 | 16.9048 |
| 1.8144 | 1.41 | 400 | 1.6786 | 42.6157 | 18.4093 | 35.3149 | 38.8787 | 16.6618 |
| 1.8094 | 1.76 | 500 | 1.6754 | 42.6907 | 18.3626 | 35.2723 | 38.9062 | 16.8474 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
bogdansinik/flan-t5-small-samsum | bogdansinik | 2023-12-19T18:44:03Z | 5 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:samsum", "base_model:google/flan-t5-small", "base_model:finetune:google/flan-t5-small", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2023-12-19T18:24:35Z |
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan-t5-small-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 42.6945
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-samsum
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6754
- Rouge1: 42.6945
- Rouge2: 18.3618
- Rougel: 35.2788
- Rougelsum: 38.882
- Gen Len: 16.8474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 52
- eval_batch_size: 52
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8824 | 0.35 | 100 | 1.7015 | 42.4473 | 18.3181 | 35.1241 | 38.7812 | 16.6532 |
| 1.8578 | 0.7 | 200 | 1.6878 | 41.9935 | 18.2168 | 34.9802 | 38.4322 | 16.7216 |
| 1.835 | 1.06 | 300 | 1.6823 | 42.7527 | 18.6238 | 35.4172 | 38.9582 | 16.9048 |
| 1.8144 | 1.41 | 400 | 1.6786 | 42.6149 | 18.4073 | 35.3408 | 38.8646 | 16.6618 |
| 1.8094 | 1.76 | 500 | 1.6754 | 42.6945 | 18.3618 | 35.2788 | 38.882 | 16.8474 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Petrovilija/flan-t5-small-samsum | Petrovilija | 2023-12-19T18:44:00Z | 4 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:samsum", "base_model:google/flan-t5-small", "base_model:finetune:google/flan-t5-small", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2023-12-19T18:24:40Z |
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan-t5-small-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 42.739
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-samsum
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6754
- Rouge1: 42.739
- Rouge2: 18.3741
- Rougel: 35.2588
- Rougelsum: 38.893
- Gen Len: 16.8474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 52
- eval_batch_size: 52
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8824 | 0.35 | 100 | 1.7015 | 42.5324 | 18.3468 | 35.0528 | 38.7814 | 16.6532 |
| 1.8578 | 0.7 | 200 | 1.6878 | 42.0766 | 18.2423 | 34.9442 | 38.4806 | 16.7216 |
| 1.835 | 1.06 | 300 | 1.6823 | 42.8147 | 18.6292 | 35.4054 | 38.956 | 16.9048 |
| 1.8144 | 1.41 | 400 | 1.6786 | 42.6886 | 18.402 | 35.3235 | 38.8638 | 16.6618 |
| 1.8094 | 1.76 | 500 | 1.6754 | 42.739 | 18.3741 | 35.2588 | 38.893 | 16.8474 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Veskic/flan-t5-small-samsum | Veskic | 2023-12-19T18:43:52Z | 4 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:samsum", "base_model:google/flan-t5-small", "base_model:finetune:google/flan-t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2023-12-19T18:24:33Z |
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: flan-t5-small-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-samsum
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.6754
- eval_rouge1: 42.7098
- eval_rouge2: 18.3566
- eval_rougeL: 35.2282
- eval_rougeLsum: 38.9027
- eval_gen_len: 16.8474
- eval_runtime: 23.9949
- eval_samples_per_second: 34.132
- eval_steps_per_second: 0.667
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 52
- eval_batch_size: 52
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LaVuna47/ppo-LunarLander-v2 | LaVuna47 | 2023-12-19T18:31:01Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-12-19T17:36:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.22 +/- 27.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following the usual `huggingface_sb3` naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repo and restore the trained agent.
checkpoint = load_from_hub("LaVuna47/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
kaitchup/Llama-2-7b-gptq-4bit | kaitchup | 2023-12-19T18:29:49Z | 26 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us"] | text-generation | 2023-08-29T09:52:57Z |
---
license: apache-2.0
language:
- en
---
# Model Card for Model ID
This is Meta's Llama 2 7B quantized to 4-bit with AutoGPTQ, through its Hugging Face Transformers integration.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **Model type:** Causal (Llama 2)
- **Language(s) (NLP):** English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0), [Llama 2 license agreement](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
### Model Sources
The method and code used to quantize the model are explained here:
[Quantize and Fine-tune LLMs with GPTQ Using Transformers and TRL](https://kaitchup.substack.com/p/quantize-and-fine-tune-llms-with)
## Uses
This model is pre-trained and not fine-tuned. You may fine-tune it with PEFT using adapters.
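A minimal loading sketch with Transformers (assumes the `optimum` and `auto-gptq` packages are installed and a CUDA GPU is available):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaitchup/Llama-2-7b-gptq-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config stored in the repo tells Transformers to load
# the checkpoint with the GPTQ kernels.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```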
## Other quantized versions
- [kaitchup/Llama-2-7b-gptq-3bit](https://huggingface.co/kaitchup/Llama-2-7b-gptq-3bit)
- [kaitchup/Llama-2-7b-gptq-2bit](https://huggingface.co/kaitchup/Llama-2-7b-gptq-2bit)
## Model Card Contact
[The Kaitchup](https://kaitchup.substack.com/)
|
FRDY/test | FRDY | 2023-12-19T18:25:02Z | 4 | 0 | peft | ["peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:codellama/CodeLlama-7b-hf", "base_model:adapter:codellama/CodeLlama-7b-hf", "region:us"] | null | 2023-12-19T14:10:33Z |
---
library_name: peft
base_model: codellama/CodeLlama-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
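Absent author-provided code, a minimal sketch for loading this adapter onto its base model (repo ids taken from the card metadata; untested):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-7b-hf"  # base model listed in the card metadata
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
# Attach the PEFT adapter weights hosted in this repository.
model = PeftModel.from_pretrained(base, "FRDY/test")
```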
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
manar21/model | manar21 | 2023-12-19T18:20:59Z | 0 | 0 | keras | ["keras", "image-segmentation", "en", "license:apache-2.0", "region:us"] | image-segmentation | 2023-11-19T21:34:20Z |
---
pipeline_tag: image-segmentation
license: apache-2.0
language:
- en
metrics:
- accuracy
library_name: keras
---
|
livingbox/scandinavian-style-v5 | livingbox | 2023-12-19T18:19:20Z | 0 | 1 | diffusers | ["diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-12-19T18:15:21Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Scandinavian-style-v5 Dreambooth model trained by livingbox with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
maraoz/mistral_instruct_generation | maraoz | 2023-12-19T18:06:05Z | 4 | 0 | peft | ["peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "region:us"] | null | 2023-12-19T18:02:01Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: mistral_instruct_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_instruct_generation
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7768 | 0.02 | 20 | 1.5506 |
| 1.5974 | 0.04 | 40 | 1.4599 |
| 1.5168 | 0.06 | 60 | 1.4403 |
| 1.5212 | 0.08 | 80 | 1.4321 |
| 1.3018 | 0.1 | 100 | 1.4259 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/smids_10x_deit_small_sgd_0001_fold5 | hkivancoral | 2023-12-19T18:02:16Z | 5 | 0 | transformers | ["transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-small-patch16-224", "base_model:finetune:facebook/deit-small-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-12-19T17:06:54Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_small_sgd_0001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.835
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_small_sgd_0001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4025
- Accuracy: 0.835
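A minimal inference sketch (repo id from this card; the image path is a placeholder — any RGB image works):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_10x_deit_small_sgd_0001_fold5",
)
# Returns the top predicted labels with scores for the given image.
print(classifier("example_slide.png"))
```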
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.999 | 1.0 | 750 | 1.0177 | 0.4867 |
| 0.9125 | 2.0 | 1500 | 0.9538 | 0.56 |
| 0.8354 | 3.0 | 2250 | 0.8848 | 0.64 |
| 0.7909 | 4.0 | 3000 | 0.8172 | 0.685 |
| 0.7315 | 5.0 | 3750 | 0.7535 | 0.7183 |
| 0.6641 | 6.0 | 4500 | 0.7023 | 0.7433 |
| 0.61 | 7.0 | 5250 | 0.6582 | 0.755 |
| 0.5883 | 8.0 | 6000 | 0.6232 | 0.7783 |
| 0.6057 | 9.0 | 6750 | 0.5936 | 0.79 |
| 0.5434 | 10.0 | 7500 | 0.5693 | 0.795 |
| 0.5298 | 11.0 | 8250 | 0.5500 | 0.7917 |
| 0.4881 | 12.0 | 9000 | 0.5324 | 0.8 |
| 0.5014 | 13.0 | 9750 | 0.5180 | 0.8 |
| 0.4862 | 14.0 | 10500 | 0.5060 | 0.8083 |
| 0.4712 | 15.0 | 11250 | 0.4949 | 0.81 |
| 0.4371 | 16.0 | 12000 | 0.4864 | 0.8117 |
| 0.4626 | 17.0 | 12750 | 0.4789 | 0.815 |
| 0.4294 | 18.0 | 13500 | 0.4706 | 0.815 |
| 0.4498 | 19.0 | 14250 | 0.4650 | 0.815 |
| 0.425 | 20.0 | 15000 | 0.4594 | 0.815 |
| 0.4212 | 21.0 | 15750 | 0.4532 | 0.8167 |
| 0.4517 | 22.0 | 16500 | 0.4489 | 0.82 |
| 0.4104 | 23.0 | 17250 | 0.4443 | 0.8167 |
| 0.4051 | 24.0 | 18000 | 0.4407 | 0.82 |
| 0.4019 | 25.0 | 18750 | 0.4371 | 0.8217 |
| 0.3884 | 26.0 | 19500 | 0.4338 | 0.825 |
| 0.3154 | 27.0 | 20250 | 0.4302 | 0.825 |
| 0.3994 | 28.0 | 21000 | 0.4273 | 0.8283 |
| 0.4061 | 29.0 | 21750 | 0.4246 | 0.83 |
| 0.4059 | 30.0 | 22500 | 0.4225 | 0.8283 |
| 0.3637 | 31.0 | 23250 | 0.4202 | 0.8267 |
| 0.3501 | 32.0 | 24000 | 0.4181 | 0.8283 |
| 0.4209 | 33.0 | 24750 | 0.4163 | 0.8317 |
| 0.3255 | 34.0 | 25500 | 0.4145 | 0.8317 |
| 0.3933 | 35.0 | 26250 | 0.4127 | 0.8317 |
| 0.3766 | 36.0 | 27000 | 0.4115 | 0.8317 |
| 0.3145 | 37.0 | 27750 | 0.4102 | 0.8317 |
| 0.3874 | 38.0 | 28500 | 0.4090 | 0.83 |
| 0.3898 | 39.0 | 29250 | 0.4079 | 0.83 |
| 0.365 | 40.0 | 30000 | 0.4069 | 0.8317 |
| 0.3728 | 41.0 | 30750 | 0.4059 | 0.8317 |
| 0.3865 | 42.0 | 31500 | 0.4051 | 0.8317 |
| 0.3813 | 43.0 | 32250 | 0.4045 | 0.8317 |
| 0.3607 | 44.0 | 33000 | 0.4040 | 0.8317 |
| 0.3955 | 45.0 | 33750 | 0.4034 | 0.8333 |
| 0.3317 | 46.0 | 34500 | 0.4031 | 0.835 |
| 0.4022 | 47.0 | 35250 | 0.4028 | 0.835 |
| 0.3888 | 48.0 | 36000 | 0.4026 | 0.835 |
| 0.3745 | 49.0 | 36750 | 0.4025 | 0.835 |
| 0.3 | 50.0 | 37500 | 0.4025 | 0.835 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
martyn/sdxl-dpo-turbo-dare-v0 | martyn | 2023-12-19T18:01:04Z | 0 | 0 | null | ["dare", "super mario merge", "pytorch", "sdxl", "sdxl_dpo", "sdxl_turbo", "merge", "text-to-image", "en", "license:mit", "region:us"] | text-to-image | 2023-12-19T17:53:44Z |
---
license: mit
language:
- en
pipeline_tag: text-to-image
inference: false
tags:
- dare
- super mario merge
- pytorch
- sdxl
- sdxl_dpo
- sdxl_turbo
- merge
---
# SDXL DPO Turbo Merge
The following were merged with DARE using [https://github.com/martyn/safetensors-merge-supermario](https://github.com/martyn/safetensors-merge-supermario)
## Mergelist
```
mhdang/dpo-sdxl-text2image-v1
stabilityai/sdxl-turbo
```
## Merge command
```shell
python3 merge.py -p 0.13 -lambda 3.0 stable_xl_dpo.safetensors sd_xl_turbo_1.0_fp16.safetensors [output]
```
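For intuition, a toy sketch of the DARE ("drop and rescale") update the merge applies per tensor: drop a fraction `p` of each model's parameter delta and rescale the survivors (the DARE paper rescales by 1/(1-p); the command above sets the factor explicitly via `-lambda`). This is an illustration, not the repository's actual implementation:
```python
import torch

def dare_delta(base: torch.Tensor, finetuned: torch.Tensor,
               p: float = 0.13, lam: float = 3.0) -> torch.Tensor:
    delta = finetuned - base               # task vector relative to the base
    keep = torch.rand_like(delta) > p      # drop each entry with probability p
    return base + keep * delta * lam       # rescale surviving deltas by lambda
```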
|
TheBloke/GEITje-7B-chat-AWQ | TheBloke | 2023-12-19T17:53:53Z | 11 | 2 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "generated_from_trainer", "GEITje", "conversational", "nl", "dataset:Rijgersberg/no_robots_nl", "dataset:Rijgersberg/ultrachat_10k_nl", "base_model:Rijgersberg/GEITje-7B-chat", "base_model:quantized:Rijgersberg/GEITje-7B-chat", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us"] | text-generation | 2023-12-19T17:37:33Z |
---
base_model: Rijgersberg/GEITje-7B-chat
datasets:
- Rijgersberg/no_robots_nl
- Rijgersberg/ultrachat_10k_nl
inference: false
language:
- nl
license: apache-2.0
model-index:
- name: GEITje-7B-chat
results: []
model_creator: Edwin Rijgersberg
model_name: Geitje 7B Chat
model_type: mistral
pipeline_tag: conversational
prompt_template: '<|user|>
{prompt}
<|assistant|>
'
quantized_by: TheBloke
tags:
- generated_from_trainer
- GEITje
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Geitje 7B Chat - AWQ
- Model creator: [Edwin Rijgersberg](https://huggingface.co/Rijgersberg)
- Original model: [Geitje 7B Chat](https://huggingface.co/Rijgersberg/GEITje-7B-chat)
<!-- description start -->
## Description
This repo contains AWQ model files for [Edwin Rijgersberg's Geitje 7B Chat](https://huggingface.co/Rijgersberg/GEITje-7B-chat).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/GEITje-7B-chat-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/GEITje-7B-chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/GEITje-7B-chat-GGUF)
* [Edwin Rijgersberg's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Rijgersberg/GEITje-7B-chat)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ToRA
```
<|user|>
{prompt}
<|assistant|>
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/GEITje-7B-chat-AWQ/tree/main) | 4 | 128 | [Dolly 15K Dutch](https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch/viewer/) | 4096 | 4.15 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/GEITje-7B-chat-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `GEITje-7B-chat-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/GEITje-7B-chat-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template='''<|user|>
{prompt}
<|assistant|>
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/GEITje-7B-chat-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/GEITje-7B-chat-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|user|>
{prompt}
<|assistant|>
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/GEITje-7B-chat-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''<|user|>
{prompt}
<|assistant|>
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Edwin Rijgersberg's Geitje 7B Chat
# GEITje-7B-chat
# GEITje-7B
GEITje is a large open Dutch language model with 7 billion parameters, based on Mistral 7B.
It has been further trained on 10 billion tokens of Dutch text.
This has improved its Dutch language skills and increased its knowledge of Dutch topics.
## Model description
### _Mistral_ – Base Model
GEITje is based on [Mistral 7B](https://mistral.ai/news/announcing-mistral-7b/).
It's a large open language model with 7 billion parameters,
trained by [Mistral AI](https://mistral.ai).
According to Mistral AI, the 7B model performs better than [Llama 2](https://ai.meta.com/llama/) 13B on all (English-language) benchmarks they tested it on.
Mistral 7B has been released under the Apache 2.0 open source license.
### _GEITje_ – Trained Further on Dutch Texts
GEITje was created by further training Mistral 7B on no less than 10 billion tokens of Dutch text from the [Dutch Gigacorpus](http://gigacorpus.nl) and the [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400) web crawling corpus.
It is a so-called _full-parameter finetune_:
performed on all parameters.
It is not a [PEFT](https://huggingface.co/blog/peft) or [LoRA](https://huggingface.co/docs/peft/conceptual_guides/lora) finetune.
Like Mistral, GEITje has a _context length_ of 8,192 tokens.
### _GEITje-chat_ – Finetuned for Dialogues
As a demonstration of GEITje's capabilities for chat applications, two initial chat variants of GEITje have also been finetuned: GEITje-chat and GEITje-chat-v2.
They can follow instructions, answer questions, and hold dialogues on a variety of topics.
## More info
Read more about GEITje-chat in the [📄 README](https://github.com/Rijgersberg/GEITje/blob/main/README-en.md) on GitHub.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0263 | 0.2 | 236 | 0.9482 |
| 1.0368 | 0.4 | 472 | 0.9574 |
| 0.9503 | 0.6 | 708 | 0.9492 |
| 1.1419 | 0.8 | 944 | 0.9406 |
| 1.2161 | 1.0 | 1180 | 0.9317 |
| 0.6695 | 1.2 | 1416 | 0.9407 |
| 0.7379 | 1.4 | 1652 | 0.9350 |
| 0.7695 | 1.6 | 1888 | 0.9282 |
| 0.6795 | 1.8 | 2124 | 0.9218 |
| 0.6217 | 2.0 | 2360 | 0.9174 |
| 0.438 | 2.2 | 2596 | 0.9546 |
| 0.3719 | 2.39 | 2832 | 0.9546 |
| 0.4853 | 2.59 | 3068 | 0.9548 |
| 0.3852 | 2.79 | 3304 | 0.9548 |
| 0.48 | 2.99 | 3540 | 0.9548 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
erfanzar/LinguaMatic-2.7B-GGUF | erfanzar | 2023-12-19T17:51:55Z | 9 | 1 | null | ["gguf", "code", "text-generation", "en", "fr", "es", "dataset:erfanzar/UltraChat-Mixin", "endpoints_compatible", "region:us"] | text-generation | 2023-12-19T17:42:37Z |
---
datasets:
- erfanzar/UltraChat-Mixin
language:
- en
- fr
- es
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- code
---
# LinguaMatic
LinguaMatic is an advanced AI model designed to handle a wide range of Natural Language Processing (NLP) tasks. With its powerful capabilities, LinguaMatic can assist with tasks such as text classification, sentiment analysis, language translation, question answering, and much more.
## EasyDel
The model was fine-tuned on a custom version of UltraChat on a TPU-v4 pod using [EasyDel](https://github.com/erfanzar/EasyDeL).
## Prompting Method
LinguaMatic utilizes the OC prompting method to generate responses, which enhances the model's ability to engage in meaningful conversations. The `os_chat_template` function below demonstrates how this prompt format is constructed:
```python
from typing import List, Optional


def os_chat_template(
    message: str,
    chat_history: Optional[List[str] | List[List[str]]] = None,
    system_prompt: Optional[str] = None,
):
    if chat_history is None:
        chat_history = []
    system = f"<|system|>\n{system_prompt}</s>" if system_prompt is not None else ""
    ua = ""
    for user_input, response in chat_history:
        ua += f"<|user|>\n{user_input}</s>\n" + f"<|assistant|>\n{response}</s>\n"
    return system + ua + f"<|user|>\n{message}</s>\n<|assistant|>\n"
```
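For example, assembling a prompt with a system message and one prior exchange:
```python
print(os_chat_template(
    "What can you do?",
    chat_history=[["Hi!", "Hello! How can I help you today?"]],
    system_prompt="You are LinguaMatic, a helpful assistant.",
))
```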
The `os_chat_template` function takes a `message` as input, along with the `chat_history` and `system_prompt`. It generates formatted text that includes the system prompt, user inputs, and the current message. This approach allows LinguaMatic to maintain context and provide more coherent, context-aware responses.
## Contributing
We welcome contributions to enhance LinguaMatic's capabilities and improve its performance. If you encounter any issues or have suggestions for improvement, please feel free to submit a pull request or open an issue on [EasyDel](https://github.com/erfanzar/EasyDeL) GitHub repository.
|
Jorkieboe/model | Jorkieboe | 2023-12-19T17:44:41Z | 0 | 1 | diffusers | ["diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-12-17T10:19:05Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: delft blue style
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Jorkieboe/model
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on "delft blue style" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was not enabled.
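A minimal generation sketch with diffusers (repo id and instance prompt taken from this card; the subject in the prompt is illustrative and untested):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Jorkieboe/model", torch_dtype=torch.float16
).to("cuda")
# "delft blue style" is the instance prompt this model was trained on.
image = pipe("a teapot, delft blue style").images[0]
image.save("delft_blue_teapot.png")
```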
|
extraltodeus/Bise_7B_m37_SSRD | extraltodeus | 2023-12-19T17:43:45Z | 19 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us"] | text-generation | 2023-12-19T16:57:05Z |
A merge of the following models:
TheBloke_Mistral-7B-Claude-Chat-GPTQ
TheBloke_airoboros-mistral2.2-7B-GPTQ
TheBloke_ANIMA-Phi-Neptune-Mistral-7B-GPTQ
TheBloke_Arithmo-Mistral-7B-GPTQ
TheBloke_AshhLimaRP-Mistral-7B-GPTQ
TheBloke_Astrid-Mistral-7B-GPTQ
TheBloke_Autolycus-Mistral_7B-GPTQ
TheBloke_Barcenas-Mistral-7B-GPTQ
TheBloke_blossom-v3-mistral-7B-GPTQ
TheBloke_CollectiveCognition-v1.1-Mistral-7B-GPTQ
TheBloke_dolphin-2.2.1-mistral-7B-GPTQ
TheBloke_Free_Sydney_V2_Mistral_7b-GPTQ
TheBloke_Generate_Question_Mistral_7B-GPTQ
TheBloke_Hermes-Trismegistus-Mistral-7B-GPTQ
TheBloke_Karen_TheEditor_V2_CREATIVE_Mistral_7B-GPTQ
TheBloke_Kimiko-Mistral-7B-GPTQ
TheBloke_Leo-Mistral-Hessianai-7B-Chat-GPTQ
TheBloke_MetaMath-Mistral-7B-GPTQ
TheBloke_Mistral-7B-AEZAKMI-v1-GPTQ
TheBloke_mistral-7B-dpo-v5-GPTQ
TheBloke_Mistral-7B-OpenOrca-GPTQ
TheBloke_Mistral-ClaudeLimaRP-v3-7B-GPTQ
TheBloke_Mistral-Trismegistus-7B-GPTQ
TheBloke_MistralLite-7B-GPTQ
TheBloke_mistral_7b_norobots-GPTQ
TheBloke_NeuralHermes-2.5-Mistral-7B-GPTQ
TheBloke_openbuddy-mistral-7B-v13.1-GPTQ
TheBloke_OpenHermes-2.5-Mistral-7B-GPTQ
TheBloke_openinstruct-mistral-7B-GPTQ
TheBloke_PiVoT-10.7B-Mistral-v0.2-RP-GPTQ
TheBloke_saiga_mistral_7b-GPTQ
TheBloke_samantha-1.2-mistral-7B-GPTQ
TheBloke_SauerkrautLM-7B-v1-mistral-GPTQ
TheBloke_SlimOpenOrca-Mistral-7B-GPTQ
TheBloke_speechless-code-mistral-7B-v1.0-GPTQ
TheBloke_Thespis-Mistral-7B-v0.6-GPTQ
TheBloke_Writing_Partner_Mistral_7B-GPTQ
The merge method selected, for each parameter, the value with the smallest sum of relative absolute differences across the source models.
The config files are copies from the TheBloke_Mistral-7B-Claude-Chat-GPTQ repository.
|
phzwart/dlsia_inpainting_saxs_gisaxs | phzwart | 2023-12-19T17:42:41Z | 0 | 0 | null | ["arxiv:2308.02559", "license:bsd", "region:us"] | null | 2023-12-19T17:38:30Z |
---
license: bsd
---
Here you will find models for inpainting diffraction images using Mixed-Scale Dense Networks.
These models are meant to be used with the dlsia library:
http://dlsia.readthedocs.io
This model and data description are associated with the final version of this paper:
DLSIA: Deep Learning for Scientific Image Analysis
Eric J Roberts, Tanny Chavez, Alexander Hexemer, Petrus H. Zwart
https://arxiv.org/abs/2308.02559
and is described in detail in this paper:
A comparison of deep-learning-based inpainting techniques for experimental X-ray scattering
T. Chavez, E. J. Roberts, P. H. Zwart and A. Hexemer
https://doi.org/10.1107/S1600576722007105
|
TheBloke/GEITje-7B-chat-GGUF | TheBloke | 2023-12-19T17:41:55Z | 142 | 3 | transformers | ["transformers", "gguf", "mistral", "generated_from_trainer", "GEITje", "conversational", "nl", "dataset:Rijgersberg/no_robots_nl", "dataset:Rijgersberg/ultrachat_10k_nl", "base_model:Rijgersberg/GEITje-7B-chat", "base_model:quantized:Rijgersberg/GEITje-7B-chat", "license:apache-2.0", "region:us"] | text-generation | 2023-12-19T17:37:33Z |
---
base_model: Rijgersberg/GEITje-7B-chat
datasets:
- Rijgersberg/no_robots_nl
- Rijgersberg/ultrachat_10k_nl
inference: false
language:
- nl
license: apache-2.0
model-index:
- name: GEITje-7B-chat
results: []
model_creator: Edwin Rijgersberg
model_name: Geitje 7B Chat
model_type: mistral
pipeline_tag: conversational
prompt_template: '<|user|>
{prompt}
<|assistant|>
'
quantized_by: TheBloke
tags:
- generated_from_trainer
- GEITje
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Geitje 7B Chat - GGUF
- Model creator: [Edwin Rijgersberg](https://huggingface.co/Rijgersberg)
- Original model: [Geitje 7B Chat](https://huggingface.co/Rijgersberg/GEITje-7B-chat)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Edwin Rijgersberg's Geitje 7B Chat](https://huggingface.co/Rijgersberg/GEITje-7B-chat).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/GEITje-7B-chat-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/GEITje-7B-chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/GEITje-7B-chat-GGUF)
* [Edwin Rijgersberg's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Rijgersberg/GEITje-7B-chat)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ToRA
```
<|user|>
{prompt}
<|assistant|>
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
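As a worked example of where these figures come from, here is a hedged back-of-the-envelope for the 2.5625 bpw of GGML_TYPE_Q2_K, assuming 2-bit quants, a 4-bit scale and 4-bit min per 16-weight block, and one 16-bit scale shared by the 256-weight super-block (the exact on-disk struct in llama.cpp may differ slightly):
```python
# Approximate bits-per-weight (bpw) for GGML_TYPE_Q2_K, from the structure above.
weights_per_block = 16
blocks_per_superblock = 16
weights_per_superblock = weights_per_block * blocks_per_superblock  # 256

quant_bits = 2 * weights_per_superblock       # 512 bits of 2-bit quants
scale_bits = (4 + 4) * blocks_per_superblock  # 128 bits of 4-bit block scales and mins
superblock_bits = 16                          # assumed shared super-block scale

print((quant_bits + scale_bits + superblock_bits) / weights_per_superblock)  # 2.5625
```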
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [geitje-7b-chat.Q2_K.gguf](https://huggingface.co/TheBloke/GEITje-7B-chat-GGUF/blob/main/geitje-7b-chat.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [geitje-7b-chat.Q3_K_S.gguf](https://huggingface.co/TheBloke/GEITje-7B-chat-GGUF/blob/main/geitje-7b-chat.Q3_K_S.gguf) | Q3_K_S | 3 | 3.17 GB| 5.67 GB | very small, high quality loss |
| [geitje-7b-chat.Q3_K_M.gguf](https://huggingface.co/TheBloke/GEITje-7B-chat-GGUF/blob/main/geitje-7b-chat.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [geitje-7b-chat.Q3_K_L.gguf](https://huggingface.co/TheBloke/GEITje-7B-chat-GGUF/blob/main/geitje-7b-chat.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [geitje-7b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/GEITje-7B-chat-GGUF/blob/main/geitje-7b-chat.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [geitje-7b-chat.Q4_K_S.gguf](https://huggingface.co/TheBloke/GEITje-7B-chat-GGUF/blob/main/geitje-7b-chat.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [geitje-7b-chat.Q4_K_M.gguf](https://huggingface.co/TheBloke/GEITje-7B-chat-GGUF/blob/main/geitje-7b-chat.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [geitje-7b-chat.Q5_0.gguf](https://huggingface.co/TheBloke/GEITje-7B-chat-GGUF/blob/main/geitje-7b-chat.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [geitje-7b-chat.Q5_K_S.gguf](https://huggingface.co/TheBloke/GEITje-7B-chat-GGUF/blob/main/geitje-7b-chat.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [geitje-7b-chat.Q5_K_M.gguf](https://huggingface.co/TheBloke/GEITje-7B-chat-GGUF/blob/main/geitje-7b-chat.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [geitje-7b-chat.Q6_K.gguf](https://huggingface.co/TheBloke/GEITje-7B-chat-GGUF/blob/main/geitje-7b-chat.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [geitje-7b-chat.Q8_0.gguf](https://huggingface.co/TheBloke/GEITje-7B-chat-GGUF/blob/main/geitje-7b-chat.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/GEITje-7B-chat-GGUF and below it, a specific filename to download, such as: geitje-7b-chat.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/GEITje-7B-chat-GGUF geitje-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/GEITje-7B-chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/GEITje-7B-chat-GGUF geitje-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m geitje-7b-chat.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|user|>\n{prompt}\n<|assistant|>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variables in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./geitje-7b-chat.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|user|>\n{prompt}\n<|assistant|>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./geitje-7b-chat.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
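To get started with the first of those guides, here is a minimal sketch using LangChain's `LlamaCpp` wrapper (class and parameter names assume the LangChain API at the time of writing; see the linked guide for the authoritative version):
```python
from langchain.llms import LlamaCpp

# Point model_path at a GGUF file downloaded from this repo
llm = LlamaCpp(
    model_path="./geitje-7b-chat.Q4_K_M.gguf",
    n_ctx=4096,        # context length; raise it if you have the RAM
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

# Prompt format from the template earlier in this README
print(llm("<|user|>\nSchrijf een haiku over kaas.\n<|assistant|>"))
```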
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Edwin Rijgersberg's Geitje 7B Chat
# GEITje-7B-chat
# GEITje-7B
GEITje is a large open Dutch language model with 7 billion parameters, based on Mistral 7B.
It has been further trained on 10 billion tokens of Dutch text.
This has improved its Dutch language skills and increased its knowledge of Dutch topics.
## Model description
### _Mistral_ – Base Model
GEITje is based on [Mistral 7B](https://mistral.ai/news/announcing-mistral-7b/).
It's a large open language model with 7 billion parameters,
trained by [Mistral AI](https://mistral.ai).
According to Mistral AI, the 7B model performs better than [Llama 2](https://ai.meta.com/llama/) 13B on all (English-language) benchmarks they tested it on.
Mistral 7B has been released under the Apache 2.0 open source license.
### _GEITje_ – Trained Further on Dutch Texts
GEITje was created by further training Mistral 7B on no less than 10 billion tokens of Dutch text from the [Dutch Gigacorpus](http://gigacorpus.nl) and the [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400) web crawling corpus.
It is a so-called _full-parameter finetune_:
performed on all parameters.
It is not a [PEFT](https://huggingface.co/blog/peft) or [LoRA](https://huggingface.co/docs/peft/conceptual_guides/lora) finetune.
Like Mistral, GEITje has a _context length_ of 8,192 tokens.
### _GEITje-chat_ – Finetuned for Dialogues
As a demonstration of GEITje's capabilities for chat applications, two initial chat variants of GEITje have also been finetuned: GEITje-chat and GEITje-chat-v2.
They can follow instructions, answer questions, and hold dialogues on a variety of topics.
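Although the card defers to the GitHub README for usage details, a minimal Transformers sketch for the unquantised chat model could look like the following (the prompt format matches the template documented above; generation settings are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Rijgersberg/GEITje-7B-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# <|user|> ... <|assistant|> prompt format, as documented for this model
prompt = "<|user|>\nSchrijf een kort gedicht over de zee.\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```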
## More info
Read more about GEITje-chat in the [📄 README](https://github.com/Rijgersberg/GEITje/blob/main/README-en.md) on GitHub.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
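For readers who want to reproduce a comparable setup with the Hugging Face `Trainer`, these settings map onto `TrainingArguments` roughly as follows (a sketch only, not the actual training script; the output path is illustrative):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="geitje-7b-chat-finetune",  # illustrative
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # 2 x 8 = total train batch size 16
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3,
    seed=42,
)
```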
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0263 | 0.2 | 236 | 0.9482 |
| 1.0368 | 0.4 | 472 | 0.9574 |
| 0.9503 | 0.6 | 708 | 0.9492 |
| 1.1419 | 0.8 | 944 | 0.9406 |
| 1.2161 | 1.0 | 1180 | 0.9317 |
| 0.6695 | 1.2 | 1416 | 0.9407 |
| 0.7379 | 1.4 | 1652 | 0.9350 |
| 0.7695 | 1.6 | 1888 | 0.9282 |
| 0.6795 | 1.8 | 2124 | 0.9218 |
| 0.6217 | 2.0 | 2360 | 0.9174 |
| 0.438 | 2.2 | 2596 | 0.9546 |
| 0.3719 | 2.39 | 2832 | 0.9546 |
| 0.4853 | 2.59 | 3068 | 0.9548 |
| 0.3852 | 2.79 | 3304 | 0.9548 |
| 0.48 | 2.99 | 3540 | 0.9548 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
<!-- original-model-card end -->
|
MattiaSangermano/bert-political-leaning-it
|
MattiaSangermano
| 2023-12-19T17:40:26Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"twitter",
"political-leaning",
"politics",
"it",
"dataset:politic-it",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-14T14:13:25Z |
---
language:
- it
tags:
- twitter
- political-leaning
- politics
datasets:
- politic-it
widget:
- text: >-
È necessario garantire salari dignitosi e condizioni di lavoro adeguate per
tutelare i diritti dei lavoratori
example_title: Left-wing example
- text: >-
L'immigrazione deve essere gestita con rigore per preservare l'identità
nazionale!
example_title: Right-wing example
model-index:
- name: bert-political-leaning-it
results:
- task:
type: text-classification
name: Text Classification
dataset:
type: social-media
name: politic-it
metrics:
- type: f1 macro
value: 61.3
- type: accuracy
value: 69.4
license: apache-2.0
metrics:
- f1
- accuracy
pipeline_tag: text-classification
---
# MattiaSangermano/bert-political-leaning-it
This model categorizes the political leaning of an Italian sentence into 4 categories: `moderate_left`, `left`, `right`, `moderate_right`. The model is a fine-tuned version of [neuraly/bert-base-italian-cased-sentiment](https://huggingface.co/neuraly/bert-base-italian-cased-sentiment).
- **Developed by:** [Mattia Sangermano](https://www.linkedin.com/in/mattia-sangermano/) and [Fabio Murgese](https://www.linkedin.com/in/fabio-murgese/)
- **Model type:** Bert
- **Language(s) (NLP):** it
- **License:** Apache 2.0
### How to Get Started with the Model
You can use this model directly with a pipeline for text classification:
``` python
from transformers import pipeline
classifier = pipeline("text-classification", model='MattiaSangermano/bert-political-leaning-it')
prediction = classifier("Sovranità nazionale e identità forte")
print(prediction)
```
Here is how to use this model to classify a text in PyTorch:
``` python
from transformers import BertForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained('MattiaSangermano/bert-political-leaning-it')
model = BertForSequenceClassification.from_pretrained('MattiaSangermano/bert-political-leaning-it')
tokens = tokenizer("Uguaglianza e giustizia sociale", return_tensors='pt')
logits = model(**tokens)[0]
prediction = model.config.id2label[torch.argmax(logits).item()]
print(prediction)
```
and in TensorFlow:
``` python
from transformers import AutoTokenizer, TFBertForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained('MattiaSangermano/bert-political-leaning-it')
model = TFBertForSequenceClassification.from_pretrained('MattiaSangermano/bert-political-leaning-it')
tokens = tokenizer("Ambiente sano, futuro sicuro", padding=True,truncation=True,return_tensors='tf')
logits = model(tokens)[0]
prediction = model.config.id2label[tf.argmax(logits,1)[0].numpy()]
print(prediction)
```
### Out-of-Scope Use
It is important to recognize that political leaning is a personal and complex aspect of an individual's identity, and attempting to classify it can be considered unethical and can raise significant concerns. Therefore, the model should not be used to identify or classify the political orientation of individual users, nor should it be used for unethical purposes.
## Bias, Risks, and Limitations
During the construction of the dataset, deliberate efforts were made to exclude the names of politicians and political parties. As a result, these specific names might not hold relevance to the model.
## Dataset
We trained the model using the [PoliticIT](https://codalab.lisn.upsaclay.fr/competitions/8507#learn_the_details) competition dataset. The dataset was collected during 2020 and 2022 from the Twitter accounts of Italian politicians. These users were selected because their political affiliation can be inferred from the party to which they belong. The goal of the original task was to classify a cluster of tweets, where a cluster is composed of texts written by different users who share the same self-assigned gender and political ideology.
### Preprocessing
According to the PoliticIT maintainers, tweets containing mentions of news sites, or linguistic clues such as the pipe symbol (commonly used by news sites to categorise their items), were discarded from the dataset. Moreover, Twitter mentions were anonymised by replacing them with the token @user. The text traits therefore cannot be guessed trivially by reading a politician's name and searching for information about them on the Internet. Overall, the dataset consists of 103,840 tweets.
#### Training Procedure
The dataset was split into train and validation sets with a stratified 80-20 split. Although the main task of the original competition was to classify clusters of tweets, this model was trained to predict the political leaning of individual tweets only.
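A sketch of the described split, assuming the tweets live in a pandas DataFrame with a `label` column (file and column names are illustrative):
```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("politic_it_tweets.csv")  # hypothetical file name

# 80-20 split, stratified on the political-leaning label
train_df, val_df = train_test_split(
    df, test_size=0.2, stratify=df["label"], random_state=42
)
```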
### Training Hyperparameters
- *Optimizer*: **Adam** with learning rate of **4e-5**, epsilon of **1e-7**
- *Loss*: **Categorical Cross Entropy** using **balanced** class weights
- *Max epochs*: **10**
- *Batch size*: **64**
- *Early Stopping*: monitoring validation loss with patience = **3**
- *Training regime*: fp16 mixed precision
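A minimal Keras sketch combining these settings (the card does not publish its training script, so the setup below is an assumption; `train_texts`, `train_labels`, `val_texts` and `val_labels` are taken to come from the split above):
```python
import numpy as np
import tensorflow as tf
from sklearn.utils.class_weight import compute_class_weight
from transformers import AutoTokenizer, TFBertForSequenceClassification

tf.keras.mixed_precision.set_global_policy("mixed_float16")  # fp16 training regime

tokenizer = AutoTokenizer.from_pretrained("neuraly/bert-base-italian-cased-sentiment")
model = TFBertForSequenceClassification.from_pretrained(
    "neuraly/bert-base-italian-cased-sentiment",
    num_labels=4,
    ignore_mismatched_sizes=True,  # the base checkpoint ships a 3-class sentiment head
)

train_enc = tokenizer(train_texts, padding=True, truncation=True, return_tensors="np")
val_enc = tokenizer(val_texts, padding=True, truncation=True, return_tensors="np")

# "Balanced" class weights, inversely proportional to class frequency
weights = compute_class_weight("balanced", classes=np.unique(train_labels), y=train_labels)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=4e-5, epsilon=1e-7),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(
    dict(train_enc),
    np.asarray(train_labels),
    validation_data=(dict(val_enc), np.asarray(val_labels)),
    epochs=10,
    batch_size=64,
    class_weight=dict(enumerate(weights)),
    callbacks=[tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3)],
)
```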
## Evaluation
- test **f1-macro**: 61.3
- test **accuracy**: 69.4
| Avg Type | Precision | Recall | F1-score | Accuracy |
| ------ | ------ | ------ | ------ | ------ |
| Macro | 0.67 | 0.61 | 0.61 | - |
| Weighted | 0.74 | 0.69 | 0.77 | 0.69 |
|
TheBloke/Metis-0.4-GPTQ
|
TheBloke
| 2023-12-19T17:34:05Z | 17 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"base_model:Mihaiii/Metis-0.4",
"base_model:quantized:Mihaiii/Metis-0.4",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-12-19T17:05:42Z |
---
base_model: Mihaiii/Metis-0.4
inference: false
license: apache-2.0
license_name: apache-2.0
metrics:
- accuracy
model_creator: Mihai
model_name: Metis 0.4
model_type: mistral
prompt_template: '<|system|>
{system_message}</s>
<|user|>
{prompt}</s>
<|assistant|>
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Metis 0.4 - GPTQ
- Model creator: [Mihai](https://huggingface.co/Mihaiii)
- Original model: [Metis 0.4](https://huggingface.co/Mihaiii/Metis-0.4)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Mihai's Metis 0.4](https://huggingface.co/Mihaiii/Metis-0.4).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Metis-0.4-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Metis-0.4-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Metis-0.4-GGUF)
* [Mihai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Mihaiii/Metis-0.4)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Zephyr
```
<|system|>
{system_message}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
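For reference, these are the same knobs you would set when producing such a quant yourself with AutoGPTQ; a sketch of a config matching the `main` branch below (parameter names from AutoGPTQ's `BaseQuantizeConfig`; calibration-dataset handling is omitted):
```python
from auto_gptq import BaseQuantizeConfig

# Mirrors the 4-bit / 128g / Act Order / damp 0.1 settings of the main branch
quantize_config = BaseQuantizeConfig(
    bits=4,            # Bits
    group_size=128,    # GS; use -1 for "None"
    desc_act=True,     # Act Order
    damp_percent=0.1,  # Damp %
)
```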
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Metis-0.4-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Metis-0.4-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Metis-0.4-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Metis-0.4-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Metis-0.4-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Metis-0.4-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Metis-0.4-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Metis-0.4-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Metis-0.4-GPTQ`:
```shell
mkdir Metis-0.4-GPTQ
huggingface-cli download TheBloke/Metis-0.4-GPTQ --local-dir Metis-0.4-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Metis-0.4-GPTQ
huggingface-cli download TheBloke/Metis-0.4-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Metis-0.4-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Metis-0.4-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Metis-0.4-GPTQ --local-dir Metis-0.4-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Metis-0.4-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Metis-0.4-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Metis-0.4-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Metis-0.4-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Metis-0.4-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # example system message for the template below
prompt_template=f'''<|system|>
{system_message}</s>
<|user|>
{prompt}</s>
<|assistant|>
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Metis-0.4-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''<|system|>
{system_message}</s>
<|user|>
{prompt}</s>
<|assistant|>
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Mihai's Metis 0.4
This is a merge between Metis-0.3 and Metis-0.1, with Metis-0.1 as the base.
It was done using [mergekit](https://github.com/cg123/mergekit).
It works well with long system prompts.
It isn't a general-purpose model: it shouldn't be used for storytelling, for example, but rather for reasoning and text comprehension.
This model is trained on a private dataset. The high GSM8K score is **NOT** because of the MetaMath dataset.
# Prompt Format:
```
<|system|>
{system_message} </s>
<|user|>
{prompt} </s>
<|assistant|>
```
Merge config:
```yaml
slices:
- sources:
- model: Mihaiii/Metis-0.3
layer_range: [0, 32]
- model: Mihaiii/Metis-0.1
layer_range: [0, 32]
merge_method: slerp
base_model: Mihaiii/Metis-0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
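For readers unfamiliar with the `slerp` merge method named in this config: it interpolates each pair of tensors along the arc between them rather than along a straight line, with `t` controlling the blend (here varied per layer and per tensor type). A standalone sketch of the operation, for illustration only (mergekit's actual implementation differs in its details):
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    if torch.sin(omega).abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    return (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```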
|
Shreyasrp/texttosql
|
Shreyasrp
| 2023-12-19T17:31:24Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-08T12:22:02Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
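These settings map directly onto `transformers.BitsAndBytesConfig`; the equivalent object would look like this (values taken from the list above):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```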
### Framework versions
- PEFT 0.5.0
# Inference Code
### Install required libraries
```python
!pip install transformers peft
```
### Login
```python
from huggingface_hub import login
token = "Your Key"
login(token)
```
#### Import necessary modules
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from transformers import BitsAndBytesConfig
from peft import prepare_model_for_kbit_training
```
#### Load PEFT model and configuration
```python
config = PeftConfig.from_pretrained("Shreyas45/Llama2_Text-to-SQL_Fintuned")
peft_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
peft_model = PeftModel.from_pretrained(peft_model, "Shreyas45/Llama2_Text-to-SQL_Fintuned")
```
### Load trained model and tokenizer
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import prepare_model_for_kbit_training
trained_model_tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, trust_remote_code=True)
trained_model_tokenizer.pad_token = trained_model_tokenizer.eos_token
```
### Define a SQL query
```python
query = '''In the table named management with columns (department_id VARCHAR, temporary_acting VARCHAR);
CREATE TABLE department (name VARCHAR, num_employees VARCHAR, department_id VARCHAR),
Show the name and number of employees for the departments managed by heads whose temporary acting value is 'Yes'?'''
```
### Construct prompt
```python
prompt = f'''### Instruction: Below is an instruction that describes a task and the schema of the table in the database.
Write a response that generates a request in the form of a SQL query.
Here the schema of the table is mentioned first followed by the question for which the query needs to be generated.
And the question is: {query}
###Output: '''
```
### Tokenize the prompt
```python
encodings = trained_model_tokenizer(prompt, return_tensors='pt')
```
#### Configure generation parameters
```python
generation_config = peft_model.generation_config
generation_config.max_new_tokens = 1024
generation_config.temperature = 0.7
generation_config.top_p = 0.7
generation_config.num_return_sequences = 1
generation_config.pad_token_id = trained_model_tokenizer.pad_token_id
generation_config.eos_token_id = trained_model_tokenizer.eos_token_id
```
### Generate SQL query using the model
```python
with torch.inference_mode():
outputs = peft_model.generate(
input_ids=encodings.input_ids,
attention_mask=encodings.attention_mask,
generation_config=generation_config,
max_new_tokens=100
)
```
### Decode and print the generated SQL query
```python
generated_query = trained_model_tokenizer.decode(outputs[0])
print("Generated SQL Query:")
print(generated_query)
```
|
digiplay/RunDiffusionFX2.5D_v1_diffusers
|
digiplay
| 2023-12-19T17:24:42Z | 795 | 10 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-03T22:33:29Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info: https://civitai.com/models/82981/rundiffusion-fx-25d
Sample images I made:


|
TheBloke/Llama-2-13B-Chat-Dutch-GPTQ
|
TheBloke
| 2023-12-19T17:17:48Z | 19 | 6 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"lora",
"adapters",
"conversational",
"nl",
"dataset:BramVanroy/dutch_chat_datasets",
"base_model:BramVanroy/Llama-2-13b-chat-dutch",
"base_model:adapter:BramVanroy/Llama-2-13b-chat-dutch",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-09-12T11:31:59Z |
---
language:
- nl
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
- llama
- lora
- adapters
datasets:
- BramVanroy/dutch_chat_datasets
base_model: BramVanroy/Llama-2-13b-chat-dutch
inference: false
model_creator: Bram Vanroy
model_type: llama
prompt_template: '[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as
possible, while being safe. Your answers should not include any harmful, unethical,
racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses
are socially unbiased and positive in nature. If a question does not make any sense,
or is not factually coherent, explain why instead of answering something not correct.
If you don''t know the answer to a question, please don''t share false information.
<</SYS>>
{prompt}[/INST]
'
quantized_by: TheBloke
model-index:
- name: Llama-2-13b-chat-dutch
results: []
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 13B Chat Dutch - GPTQ
- Model creator: [Bram Vanroy](https://huggingface.co/BramVanroy)
- Original model: [Llama 2 13B Chat Dutch](https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Bram Vanroy's Llama 2 13B Chat Dutch](https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GGUF)
* [Bram Vanroy's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Chat
```
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-sa-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Bram Vanroy's Llama 2 13B Chat Dutch](https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch).
<!-- licensing end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [Dolly 15K Dutch](https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch) | 4096 | 7.26 GB | Yes | 4-bit, without Act Order and group size 128g. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Dolly 15K Dutch](https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Dolly 15K Dutch](https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [Dolly 15K Dutch](https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Dolly 15K Dutch](https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Dolly 15K Dutch](https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Llama-2-13B-Chat-Dutch-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Llama-2-13B-Chat-Dutch-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Llama-2-13B-Chat-Dutch-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Llama-2-13B-Chat-Dutch-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Llama-2-13B-Chat-Dutch-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Bram Vanroy's Llama 2 13B Chat Dutch
# Llama-2-13b-chat-dutch
This model is a fine-tuned version of [BramVanroy/llama2-13b-ft-mc4_nl_cleaned_tiny](https://huggingface.co/BramVanroy/llama2-13b-ft-mc4_nl_cleaned_tiny)
on the [BramVanroy/dutch_chat_datasets](https://huggingface.co/datasets/BramVanroy/dutch_chat_datasets) dataset on a context of 4096 tokens.
See the original [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) for more information, intended use, and biases.
If you use this model or refer to it, please use the following citation:
Bram Vanroy. (2023). Llama v2 13b: Finetuned on Dutch Conversational Data. Hugging Face. https://doi.org/10.57967/HF/1018
```bibtex
@misc{https://doi.org/10.57967/hf/1018,
doi = {10.57967/HF/1018},
url = {https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch},
author = {{Bram Vanroy}},
title = {{Llama} v2 13b: {Finetuned} on {Dutch} Conversational Data},
publisher = {{Hugging} {Face}},
year = {2023}
}
```
## Model description
I could not get the original Llama 2 13B to produce much Dutch, even though the description paper indicates that it was trained on a (small) portion of Dutch data. I therefore
continued training the original Llama 2 13B checkpoint on Dutch data [in regular CLM](https://huggingface.co/BramVanroy/llama2-13b-ft-mc4_nl_cleaned_tiny). In a second
step I finetuned that model on a collection of synthetic (translated) instruction and chat datasets that I have [collected](https://huggingface.co/datasets/BramVanroy/dutch_chat_datasets).
See their pages for licensing, usage, creation, and citation information.
- https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch
- https://huggingface.co/datasets/BramVanroy/alpaca-cleaned-dutch-baize
- https://huggingface.co/datasets/BramVanroy/stackoverflow-chat-dutch
- https://huggingface.co/datasets/BramVanroy/quora-chat-dutch
This model is the result of that process. While not perfect by any means, it can perform reasonably well in Dutch depending on the prompts. It is also decent at helping with programming tasks.
## Intended uses & limitations
Depending on the prompt, the model can return good results considering that it is only 13B in size and was only marginally pretrained on Dutch. That being said, the
model was not trained on human feedback and contains no safeguards, so it may produce unexpected and even offensive content depending on the query. The only attempt
at a safeguard is the default prompt that it was trained on, which was
> Je bent een behulpzame, respectvolle en eerlijke assistent. Antwoord altijd zo behulpzaam mogelijk. Je antwoorden mogen geen schadelijke, onethische, racistische, seksistische, gevaarlijke of illegale inhoud bevatten. Zorg ervoor dat je antwoorden sociaal onbevooroordeeld en positief van aard zijn.\n\nAls een vraag nergens op slaat of feitelijk niet coherent is, leg dan uit waarom in plaats van iets niet correct te antwoorden. Als je het antwoord op een vraag niet weet, deel dan geen onjuiste informatie.
Use with caution and at your own risk!
Because the model was trained on synthetic data translated with OpenAI's API, you cannot use this model to create a product that competes with theirs.
## Training procedure
Trained with a 4096-token context length. The dataset was preprocessed so that as many dialogs as possible were packed into a single batch, without disrupting
dialogs. In other words, a dialog was never split across different sequences or batches. During training, the human prompts were ignored in backpropagation.
Trained with LoRA targeting ["q_proj", "v_proj"] in 4-bit and merged before upload. Trained with Flash Attention as borrowed from [here](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/utils/llama_patch.py).
The adapters are in the `adapters` branch.
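For illustration, ignoring the human prompts in backpropagation is typically done by setting their label ids to -100, which PyTorch's cross-entropy loss skips. A minimal sketch of that idea (the span offsets here are hypothetical; this is not the exact preprocessing code used):
```python
import torch

IGNORE_INDEX = -100  # label id ignored by PyTorch's cross-entropy loss

def mask_human_turns(input_ids, human_spans):
    """Return labels where human-prompt token spans contribute nothing to the loss."""
    labels = input_ids.clone()
    for start, end in human_spans:  # hypothetical (start, end) token offsets of human turns
        labels[start:end] = IGNORE_INDEX
    return labels

# Example: tokens 0-9 belong to the human prompt, the rest to the assistant reply.
input_ids = torch.arange(20)
labels = mask_human_turns(input_ids, [(0, 10)])
```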
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
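For orientation, these settings map onto Hugging Face `TrainingArguments` roughly as follows (a sketch only, not the actual training script; the output directory is hypothetical):
```python
from transformers import TrainingArguments

# Approximate mapping of the hyperparameters listed above (sketch only).
args = TrainingArguments(
    output_dir="llama2-13b-chat-dutch",  # hypothetical output directory
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=2,
    seed=42,
)
```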
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0193 | 0.09 | 20 | 1.1583 |
| 0.9743 | 0.17 | 40 | 1.1339 |
| 0.9159 | 0.26 | 60 | 1.1218 |
| 0.9131 | 0.35 | 80 | 1.1153 |
| 0.8816 | 0.44 | 100 | 1.1130 |
| 0.8977 | 0.52 | 120 | 1.1069 |
| 0.9061 | 0.61 | 140 | 1.1025 |
| 0.8672 | 0.7 | 160 | 1.1024 |
| 0.8956 | 0.79 | 180 | 1.0971 |
| 0.8514 | 0.87 | 200 | 1.0995 |
| 0.8357 | 0.96 | 220 | 1.0952 |
| 0.8294 | 1.05 | 240 | 1.0964 |
| 0.8531 | 1.13 | 260 | 1.0947 |
| 0.8321 | 1.22 | 280 | 1.0951 |
| 0.8365 | 1.31 | 300 | 1.0910 |
| 0.8616 | 1.4 | 320 | 1.0894 |
| 0.8397 | 1.48 | 340 | 1.0904 |
| 0.861 | 1.57 | 360 | 1.0880 |
| 0.8116 | 1.66 | 380 | 1.0871 |
| 0.8285 | 1.74 | 400 | 1.0855 |
| 0.8603 | 1.83 | 420 | 1.0856 |
| 0.8126 | 1.92 | 440 | 1.0848 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
greatakela/debtest-trainer
|
greatakela
| 2023-12-19T17:16:20Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-base",
"base_model:finetune:microsoft/deberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-19T17:15:35Z |
---
license: mit
base_model: microsoft/deberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: debtest-trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# debtest-trainer
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7043
- Accuracy: 0.4995
- F1: 0.6662
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.6911 | 1.0 | 6787 | 0.7315 | 0.4995 | 0.6662 |
| 0.689 | 2.0 | 13574 | 0.7055 | 0.4995 | 0.6662 |
| 0.6868 | 3.0 | 20361 | 0.7043 | 0.4995 | 0.6662 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
TheBloke/Llama-2-13B-Chat-Dutch-AWQ
|
TheBloke
| 2023-12-19T17:15:58Z | 22 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"lora",
"adapters",
"conversational",
"nl",
"dataset:BramVanroy/dutch_chat_datasets",
"base_model:BramVanroy/Llama-2-13b-chat-dutch",
"base_model:adapter:BramVanroy/Llama-2-13b-chat-dutch",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2023-09-19T04:51:55Z |
---
language:
- nl
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
- llama
- lora
- adapters
datasets:
- BramVanroy/dutch_chat_datasets
base_model: BramVanroy/Llama-2-13b-chat-dutch
inference: false
model_creator: Bram Vanroy
model_type: llama
prompt_template: '[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as
possible, while being safe. Your answers should not include any harmful, unethical,
racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses
are socially unbiased and positive in nature. If a question does not make any sense,
or is not factually coherent, explain why instead of answering something not correct.
If you don''t know the answer to a question, please don''t share false information.
<</SYS>>
{prompt}[/INST]
'
quantized_by: TheBloke
model-index:
- name: Llama-2-13b-chat-dutch
results: []
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 13B Chat Dutch - AWQ
- Model creator: [Bram Vanroy](https://huggingface.co/BramVanroy)
- Original model: [Llama 2 13B Chat Dutch](https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch)
<!-- description start -->
## Description
This repo contains AWQ model files for [Bram Vanroy's Llama 2 13B Chat Dutch](https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by the continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models; however, using AWQ enables much smaller GPUs, which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GGUF)
* [Bram Vanroy's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Chat
```
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-sa-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Bram Vanroy's Llama 2 13B Chat Dutch](https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch).
<!-- licensing end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-AWQ/tree/main) | 4 | 128 | [Dolly 15K Dutch](https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch) | 4096 | 7.25 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Llama-2-13B-Chat-Dutch-AWQ --quantization awq
```
- When using vLLM from Python code, pass the `quantization="awq"` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Llama-2-13B-Chat-Dutch-AWQ", quantization="awq")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/Llama-2-13B-Chat-Dutch-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
# Inference can also be done using transformers' pipeline
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm).
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Bram Vanroy's Llama 2 13B Chat Dutch
# Llama-2-13b-chat-dutch
This model is a fine-tuned version of [BramVanroy/llama2-13b-ft-mc4_nl_cleaned_tiny](https://huggingface.co/BramVanroy/llama2-13b-ft-mc4_nl_cleaned_tiny)
on the [BramVanroy/dutch_chat_datasets](https://huggingface.co/datasets/BramVanroy/dutch_chat_datasets) dataset on a context of 4096 tokens.
See the original [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) for more information, intended use, and biases.
If you use this model or refer to it, please use the following citation:
Bram Vanroy. (2023). Llama v2 13b: Finetuned on Dutch Conversational Data. Hugging Face. https://doi.org/10.57967/HF/1018
```bibtex
@misc{https://doi.org/10.57967/hf/1018,
doi = {10.57967/HF/1018},
url = {https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch},
author = {{Bram Vanroy}},
title = {{Llama} v2 13b: {Finetuned} on {Dutch} Conversational Data},
publisher = {{Hugging} {Face}},
year = {2023}
}
```
## Model description
I could not get the original Llama 2 13B to produce much Dutch, even though the description paper indicates that it was trained on a (small) portion of Dutch data. I therefore
continued training the original Llama 2 13B checkpoint on Dutch data [in regular CLM](https://huggingface.co/BramVanroy/llama2-13b-ft-mc4_nl_cleaned_tiny). In a second
step I finetuned that model on a collection of synthetic (translated) instruction and chat datasets that I have [collected](https://huggingface.co/datasets/BramVanroy/dutch_chat_datasets).
See their pages for licensing, usage, creation, and citation information.
- https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch
- https://huggingface.co/datasets/BramVanroy/alpaca-cleaned-dutch-baize
- https://huggingface.co/datasets/BramVanroy/stackoverflow-chat-dutch
- https://huggingface.co/datasets/BramVanroy/quora-chat-dutch
This model is the result of that process. While not perfect by any means, it can perform reasonably well in Dutch depending on the prompts. It is also decent at helping with programming tasks.
## Intended uses & limitations
Depending on the prompt, the model can return good results considering that it is only 13B in size and was only marginally pretrained on Dutch. That being said, the
model was not trained on human feedback and contains no safeguards, so it may produce unexpected and even offensive content depending on the query. The only attempt
at a safeguard is the default prompt that it was trained on, which was
> Je bent een behulpzame, respectvolle en eerlijke assistent. Antwoord altijd zo behulpzaam mogelijk. Je antwoorden mogen geen schadelijke, onethische, racistische, seksistische, gevaarlijke of illegale inhoud bevatten. Zorg ervoor dat je antwoorden sociaal onbevooroordeeld en positief van aard zijn.\n\nAls een vraag nergens op slaat of feitelijk niet coherent is, leg dan uit waarom in plaats van iets niet correct te antwoorden. Als je het antwoord op een vraag niet weet, deel dan geen onjuiste informatie.
Use with caution and at your own risk!
Because the model was trained on synthetic data translated with OpenAI's API, you cannot use this model to create a product that competes with theirs.
## Training procedure
Trained with a 4096-token context length. The dataset was preprocessed so that as many dialogs as possible were packed into a single batch, without disrupting
dialogs. In other words, a dialog was never split across different sequences or batches. During training, the human prompts were ignored in backpropagation.
Trained with LoRA targeting ["q_proj", "v_proj"] in 4-bit and merged before upload. Trained with Flash Attention as borrowed from [here](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/utils/llama_patch.py).
The adapters are in the `adapters` branch.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0193 | 0.09 | 20 | 1.1583 |
| 0.9743 | 0.17 | 40 | 1.1339 |
| 0.9159 | 0.26 | 60 | 1.1218 |
| 0.9131 | 0.35 | 80 | 1.1153 |
| 0.8816 | 0.44 | 100 | 1.1130 |
| 0.8977 | 0.52 | 120 | 1.1069 |
| 0.9061 | 0.61 | 140 | 1.1025 |
| 0.8672 | 0.7 | 160 | 1.1024 |
| 0.8956 | 0.79 | 180 | 1.0971 |
| 0.8514 | 0.87 | 200 | 1.0995 |
| 0.8357 | 0.96 | 220 | 1.0952 |
| 0.8294 | 1.05 | 240 | 1.0964 |
| 0.8531 | 1.13 | 260 | 1.0947 |
| 0.8321 | 1.22 | 280 | 1.0951 |
| 0.8365 | 1.31 | 300 | 1.0910 |
| 0.8616 | 1.4 | 320 | 1.0894 |
| 0.8397 | 1.48 | 340 | 1.0904 |
| 0.861 | 1.57 | 360 | 1.0880 |
| 0.8116 | 1.66 | 380 | 1.0871 |
| 0.8285 | 1.74 | 400 | 1.0855 |
| 0.8603 | 1.83 | 420 | 1.0856 |
| 0.8126 | 1.92 | 440 | 1.0848 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
andrijdavid/phi-2-GGUF
|
andrijdavid
| 2023-12-19T17:14:25Z | 90 | 0 | null |
[
"gguf",
"nlp",
"code",
"text-generation",
"en",
"base_model:microsoft/phi-2",
"base_model:quantized:microsoft/phi-2",
"license:other",
"region:us"
] |
text-generation
| 2023-12-19T16:47:22Z |
---
inference: false
base_model: microsoft/phi-2
license: other
license_name: microsoft-research-license
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
quantized_by: andrijdavid
model_name: Phi 2
model_type: phi-msft
tags:
- nlp
- code
---
This repository contains GGUF format model files for [Microsoft's Phi 2](https://huggingface.co/microsoft/phi-2).
## Model Summary
Phi-2 is a Transformer with **2.7 billion** parameters. It was trained using the same data sources as [Phi-1.5](https://huggingface.co/microsoft/phi-1.5), augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased a nearly state-of-the-art performance among models with less than 13 billion parameters.
Phi-2 has not been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
### QA Format:
You can provide the prompt as a standalone question as follows:
```markdown
Write a detailed analogy between mathematics and a lighthouse.
```
where the model generates the text that follows the prompt's final "."
To encourage the model to write more concise answers, you can also try the following QA format using "Instruct: \<prompt\>\nOutput:"
```markdown
Instruct: Write a detailed analogy between mathematics and a lighthouse.
Output: Mathematics is like a lighthouse. Just as a lighthouse guides ships safely to shore, mathematics provides a guiding light in the world of numbers and logic. It helps us navigate through complex problems and find solutions. Just as a lighthouse emits a steady beam of light, mathematics provides a consistent framework for reasoning and problem-solving. It illuminates the path to understanding and helps us make sense of the world around us.
```
where the model generates the text after "Output:".
### Chat Format:
```markdown
Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?
Bob: Well, have you tried creating a study schedule and sticking to it?
Alice: Yes, I have, but it doesn't seem to help much.
Bob: Hmm, maybe you should try studying in a quiet environment, like the library.
Alice: ...
```
where the model generates the text after the first "Bob:".
### Code Format:
```python
import math

def print_prime(n):
"""
Print all primes between 1 and n
"""
primes = []
for num in range(2, n+1):
is_prime = True
for i in range(2, int(math.sqrt(num))+1):
if num % i == 0:
is_prime = False
break
if is_prime:
primes.append(num)
print(primes)
```
where the model generates the text after the comments.
**Notes:**
* Phi-2 is intended for research purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Direct adoption for production tasks is out of the scope of this research project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using `transformers>=4.36.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
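Since this repo ships GGUF files, here is a minimal sketch of running one with `llama-cpp-python` (the quantised filename below is an assumption; substitute one of the files actually provided in this repo):
```python
from llama_cpp import Llama

# Load a GGUF quantisation of Phi-2 (filename is illustrative).
llm = Llama(model_path="phi-2.Q4_K_M.gguf", n_ctx=2048)

prompt = "Instruct: Write a detailed analogy between mathematics and a lighthouse.\nOutput:"
output = llm(prompt, max_tokens=128, stop=["Instruct:"])
print(output["choices"][0]["text"])
```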
|
hkivancoral/smids_5x_deit_small_sgd_0001_fold5
|
hkivancoral
| 2023-12-19T17:13:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-19T15:33:59Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_5x_deit_small_sgd_0001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8116666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_small_sgd_0001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4899
- Accuracy: 0.8117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0575 | 1.0 | 375 | 1.0409 | 0.4667 |
| 0.9896 | 2.0 | 750 | 1.0031 | 0.5117 |
| 0.9428 | 3.0 | 1125 | 0.9645 | 0.5567 |
| 0.9186 | 4.0 | 1500 | 0.9265 | 0.615 |
| 0.8922 | 5.0 | 1875 | 0.8895 | 0.6483 |
| 0.8541 | 6.0 | 2250 | 0.8539 | 0.6717 |
| 0.7885 | 7.0 | 2625 | 0.8194 | 0.69 |
| 0.7714 | 8.0 | 3000 | 0.7879 | 0.705 |
| 0.758 | 9.0 | 3375 | 0.7592 | 0.7133 |
| 0.7212 | 10.0 | 3750 | 0.7334 | 0.7217 |
| 0.6793 | 11.0 | 4125 | 0.7102 | 0.7333 |
| 0.6484 | 12.0 | 4500 | 0.6895 | 0.7367 |
| 0.6765 | 13.0 | 4875 | 0.6713 | 0.7467 |
| 0.664 | 14.0 | 5250 | 0.6548 | 0.7533 |
| 0.6332 | 15.0 | 5625 | 0.6395 | 0.7617 |
| 0.5983 | 16.0 | 6000 | 0.6261 | 0.77 |
| 0.6122 | 17.0 | 6375 | 0.6142 | 0.77 |
| 0.5912 | 18.0 | 6750 | 0.6024 | 0.7733 |
| 0.5764 | 19.0 | 7125 | 0.5918 | 0.775 |
| 0.5461 | 20.0 | 7500 | 0.5824 | 0.7783 |
| 0.5245 | 21.0 | 7875 | 0.5733 | 0.7833 |
| 0.5339 | 22.0 | 8250 | 0.5654 | 0.7867 |
| 0.5651 | 23.0 | 8625 | 0.5584 | 0.7867 |
| 0.5365 | 24.0 | 9000 | 0.5518 | 0.7933 |
| 0.4982 | 25.0 | 9375 | 0.5457 | 0.795 |
| 0.5274 | 26.0 | 9750 | 0.5402 | 0.7933 |
| 0.5167 | 27.0 | 10125 | 0.5353 | 0.795 |
| 0.53 | 28.0 | 10500 | 0.5303 | 0.7967 |
| 0.5404 | 29.0 | 10875 | 0.5260 | 0.7967 |
| 0.4414 | 30.0 | 11250 | 0.5222 | 0.8017 |
| 0.5269 | 31.0 | 11625 | 0.5183 | 0.8017 |
| 0.5299 | 32.0 | 12000 | 0.5150 | 0.8017 |
| 0.5311 | 33.0 | 12375 | 0.5120 | 0.8033 |
| 0.499 | 34.0 | 12750 | 0.5091 | 0.8033 |
| 0.4712 | 35.0 | 13125 | 0.5065 | 0.8033 |
| 0.4169 | 36.0 | 13500 | 0.5042 | 0.8017 |
| 0.4803 | 37.0 | 13875 | 0.5020 | 0.8017 |
| 0.4796 | 38.0 | 14250 | 0.5001 | 0.805 |
| 0.4865 | 39.0 | 14625 | 0.4984 | 0.8067 |
| 0.5122 | 40.0 | 15000 | 0.4967 | 0.8083 |
| 0.4785 | 41.0 | 15375 | 0.4953 | 0.8067 |
| 0.4562 | 42.0 | 15750 | 0.4941 | 0.8083 |
| 0.5248 | 43.0 | 16125 | 0.4930 | 0.8117 |
| 0.4817 | 44.0 | 16500 | 0.4922 | 0.8117 |
| 0.4662 | 45.0 | 16875 | 0.4914 | 0.8117 |
| 0.4968 | 46.0 | 17250 | 0.4908 | 0.8117 |
| 0.5157 | 47.0 | 17625 | 0.4904 | 0.8117 |
| 0.4378 | 48.0 | 18000 | 0.4901 | 0.8117 |
| 0.4668 | 49.0 | 18375 | 0.4899 | 0.8117 |
| 0.4722 | 50.0 | 18750 | 0.4899 | 0.8117 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/smids_10x_deit_small_sgd_00001_fold4
|
hkivancoral
| 2023-12-19T17:07:35Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-19T16:12:10Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_small_sgd_00001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5666666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_small_sgd_00001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9309
- Accuracy: 0.5667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.049 | 1.0 | 750 | 1.0657 | 0.4217 |
| 1.0271 | 2.0 | 1500 | 1.0619 | 0.4233 |
| 1.0309 | 3.0 | 2250 | 1.0577 | 0.4233 |
| 1.0685 | 4.0 | 3000 | 1.0531 | 0.4233 |
| 1.0213 | 5.0 | 3750 | 1.0484 | 0.425 |
| 1.0075 | 6.0 | 4500 | 1.0438 | 0.4267 |
| 1.0135 | 7.0 | 5250 | 1.0390 | 0.4283 |
| 1.0193 | 8.0 | 6000 | 1.0343 | 0.43 |
| 1.0172 | 9.0 | 6750 | 1.0296 | 0.4383 |
| 0.995 | 10.0 | 7500 | 1.0249 | 0.4417 |
| 0.9861 | 11.0 | 8250 | 1.0204 | 0.4467 |
| 0.9925 | 12.0 | 9000 | 1.0158 | 0.4533 |
| 0.9841 | 13.0 | 9750 | 1.0115 | 0.465 |
| 0.9738 | 14.0 | 10500 | 1.0072 | 0.4733 |
| 0.9779 | 15.0 | 11250 | 1.0030 | 0.4783 |
| 0.9393 | 16.0 | 12000 | 0.9988 | 0.485 |
| 0.968 | 17.0 | 12750 | 0.9949 | 0.485 |
| 0.9542 | 18.0 | 13500 | 0.9909 | 0.4883 |
| 0.9456 | 19.0 | 14250 | 0.9871 | 0.4917 |
| 0.9805 | 20.0 | 15000 | 0.9834 | 0.4967 |
| 0.9272 | 21.0 | 15750 | 0.9798 | 0.5 |
| 0.9402 | 22.0 | 16500 | 0.9763 | 0.5083 |
| 0.9463 | 23.0 | 17250 | 0.9729 | 0.5133 |
| 0.9349 | 24.0 | 18000 | 0.9697 | 0.515 |
| 0.9212 | 25.0 | 18750 | 0.9666 | 0.5167 |
| 0.9115 | 26.0 | 19500 | 0.9636 | 0.5183 |
| 0.9201 | 27.0 | 20250 | 0.9607 | 0.5217 |
| 0.9475 | 28.0 | 21000 | 0.9580 | 0.525 |
| 0.9135 | 29.0 | 21750 | 0.9554 | 0.5267 |
| 0.9341 | 30.0 | 22500 | 0.9529 | 0.53 |
| 0.9173 | 31.0 | 23250 | 0.9505 | 0.5317 |
| 0.9276 | 32.0 | 24000 | 0.9483 | 0.535 |
| 0.9211 | 33.0 | 24750 | 0.9462 | 0.5417 |
| 0.9232 | 34.0 | 25500 | 0.9443 | 0.5467 |
| 0.9171 | 35.0 | 26250 | 0.9425 | 0.5483 |
| 0.9007 | 36.0 | 27000 | 0.9408 | 0.5483 |
| 0.9143 | 37.0 | 27750 | 0.9393 | 0.555 |
| 0.8916 | 38.0 | 28500 | 0.9379 | 0.5567 |
| 0.8951 | 39.0 | 29250 | 0.9366 | 0.5567 |
| 0.9014 | 40.0 | 30000 | 0.9355 | 0.5567 |
| 0.8889 | 41.0 | 30750 | 0.9345 | 0.5583 |
| 0.8953 | 42.0 | 31500 | 0.9336 | 0.5583 |
| 0.9154 | 43.0 | 32250 | 0.9329 | 0.5583 |
| 0.8836 | 44.0 | 33000 | 0.9323 | 0.5633 |
| 0.8961 | 45.0 | 33750 | 0.9318 | 0.5667 |
| 0.8837 | 46.0 | 34500 | 0.9314 | 0.5667 |
| 0.8621 | 47.0 | 35250 | 0.9311 | 0.5667 |
| 0.8982 | 48.0 | 36000 | 0.9310 | 0.5667 |
| 0.8793 | 49.0 | 36750 | 0.9309 | 0.5667 |
| 0.8813 | 50.0 | 37500 | 0.9309 | 0.5667 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/smids_10x_deit_small_sgd_0001_fold4
|
hkivancoral
| 2023-12-19T17:06:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-19T16:11:02Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_small_sgd_0001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8416666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_small_sgd_0001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4063
- Accuracy: 0.8417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9715 | 1.0 | 750 | 1.0172 | 0.455 |
| 0.9076 | 2.0 | 1500 | 0.9524 | 0.5267 |
| 0.8403 | 3.0 | 2250 | 0.8812 | 0.625 |
| 0.7987 | 4.0 | 3000 | 0.8125 | 0.6817 |
| 0.7256 | 5.0 | 3750 | 0.7521 | 0.7183 |
| 0.6364 | 6.0 | 4500 | 0.7018 | 0.7483 |
| 0.5752 | 7.0 | 5250 | 0.6571 | 0.775 |
| 0.63 | 8.0 | 6000 | 0.6211 | 0.7817 |
| 0.6197 | 9.0 | 6750 | 0.5901 | 0.79 |
| 0.5118 | 10.0 | 7500 | 0.5651 | 0.7983 |
| 0.5006 | 11.0 | 8250 | 0.5449 | 0.8017 |
| 0.5617 | 12.0 | 9000 | 0.5276 | 0.8033 |
| 0.4842 | 13.0 | 9750 | 0.5134 | 0.8083 |
| 0.5031 | 14.0 | 10500 | 0.5016 | 0.81 |
| 0.4417 | 15.0 | 11250 | 0.4908 | 0.8083 |
| 0.4457 | 16.0 | 12000 | 0.4818 | 0.8083 |
| 0.3768 | 17.0 | 12750 | 0.4743 | 0.8117 |
| 0.4232 | 18.0 | 13500 | 0.4671 | 0.8167 |
| 0.4491 | 19.0 | 14250 | 0.4614 | 0.8167 |
| 0.4472 | 20.0 | 15000 | 0.4557 | 0.8233 |
| 0.3954 | 21.0 | 15750 | 0.4506 | 0.8267 |
| 0.405 | 22.0 | 16500 | 0.4463 | 0.83 |
| 0.4169 | 23.0 | 17250 | 0.4425 | 0.8317 |
| 0.4563 | 24.0 | 18000 | 0.4389 | 0.8333 |
| 0.3987 | 25.0 | 18750 | 0.4356 | 0.8333 |
| 0.39 | 26.0 | 19500 | 0.4325 | 0.8317 |
| 0.4056 | 27.0 | 20250 | 0.4297 | 0.8317 |
| 0.3872 | 28.0 | 21000 | 0.4272 | 0.8317 |
| 0.3817 | 29.0 | 21750 | 0.4249 | 0.835 |
| 0.4035 | 30.0 | 22500 | 0.4229 | 0.8367 |
| 0.3636 | 31.0 | 23250 | 0.4211 | 0.835 |
| 0.4122 | 32.0 | 24000 | 0.4193 | 0.8367 |
| 0.3917 | 33.0 | 24750 | 0.4176 | 0.8383 |
| 0.3839 | 34.0 | 25500 | 0.4161 | 0.84 |
| 0.3217 | 35.0 | 26250 | 0.4147 | 0.84 |
| 0.3641 | 36.0 | 27000 | 0.4136 | 0.84 |
| 0.3379 | 37.0 | 27750 | 0.4124 | 0.84 |
| 0.3959 | 38.0 | 28500 | 0.4115 | 0.84 |
| 0.3972 | 39.0 | 29250 | 0.4106 | 0.84 |
| 0.3899 | 40.0 | 30000 | 0.4098 | 0.84 |
| 0.3662 | 41.0 | 30750 | 0.4090 | 0.84 |
| 0.3473 | 42.0 | 31500 | 0.4084 | 0.8417 |
| 0.3905 | 43.0 | 32250 | 0.4078 | 0.8417 |
| 0.3794 | 44.0 | 33000 | 0.4074 | 0.8417 |
| 0.3783 | 45.0 | 33750 | 0.4070 | 0.8417 |
| 0.3309 | 46.0 | 34500 | 0.4067 | 0.8417 |
| 0.3086 | 47.0 | 35250 | 0.4065 | 0.8417 |
| 0.3454 | 48.0 | 36000 | 0.4063 | 0.8417 |
| 0.3559 | 49.0 | 36750 | 0.4063 | 0.8417 |
| 0.323 | 50.0 | 37500 | 0.4063 | 0.8417 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
R136a1/Frostwind-10.7B-v1-exl2
|
R136a1
| 2023-12-19T16:54:08Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-19T16:46:02Z |
---
license: cc-by-nc-4.0
language:
- en
---
### 8bpw 8h
Frostwind-v1

A finetune of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0)
<br>Took roughly 3 hours with 4x 4090s, over 2 epochs, with around 52K varied samples.
Dataset Composition:
<br>20% - Coding
<br>30% - Instruct
<br>30% - Generalised Data
<br>10% - Roleplay
<br>10% - Dealignment
***
Testing Notes:
Fairly smart, as I expected. Obviously not at the level of the bigger models, but I did not expect that level from this.
Could be sampler issues, but I generally needed one or two swipes to get the correct answer in zero-context tests. If context is filled, no issues on my end.
For roleplays: adding instructions like "avoid writing as {{user}}" surprisingly helps, plus a proper prompt of course. I liked the writing style. It handled group characters in one card well during my tests.
Fairly uncensored *during roleplay.* The "as an AI" responses can happen at zero context, but I have no issues once a character card is introduced. I had no issues making outputs that would give me 2500 life sentences if posted here.
***
Trained with Alpaca Format:
```
### Instruction:
<Prompt>
### Response:
```
OR
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
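For illustration, a small helper that assembles the Alpaca format above (a sketch; not shipped with the model):
```python
def build_alpaca_prompt(instruction, context=None):
    """Assemble the Alpaca-style prompt this model was trained on."""
    if context is None:
        return f"### Instruction:\n{instruction}\n\n### Response:\n"
    return (
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{context}\n\n"
        f"### Response:\n"
    )

print(build_alpaca_prompt("Summarise the plot of Hamlet."))
```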
***
<br>wandb:
<br>wandb: Run history:
<br>wandb: eval/loss █▃▂▂▂▂▂▁▁▁▁▂▂▂▂▂▂▁▁▁
<br>wandb: eval/runtime ▃▂▃▂▃▂▂▃▁▃█▂▃▃▃▂▃▃▂▂
<br>wandb: eval/samples_per_second ▆▇▆▇▆▇▇▆█▆▁▇▆▆▆▇▆▆▇▇
<br>wandb: eval/steps_per_second ▆▇▆▇▆▇▇▆█▆▁▇▆▆▆▇▆▆▇▇
<br>wandb: train/epoch ▁▁▁▂▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███
<br>wandb: train/global_step ▁▁▁▂▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███
<br>wandb: train/learning_rate ▄███████▇▇▇▇▇▆▆▆▆▅▅▅▅▄▄▄▃▃▃▃▂▂▂▂▂▁▁▁▁▁▁▁
<br>wandb: train/loss █▅▅▆▅▅▄▄▄▆▆▅▆▆▆▅▄▆▅▅▅▆▄▄▃▄▃▃▂▃▄▂▂▃▃▂▁▂▂▂
<br>wandb:
<br>wandb: Run summary:
<br>wandb: eval/loss 0.74622
<br>wandb: eval/runtime 72.5049
<br>wandb: eval/samples_per_second 37.239
<br>wandb: eval/steps_per_second 2.331
<br>wandb: train/epoch 1.98
<br>wandb: train/global_step 410
<br>wandb: train/learning_rate 0.0
<br>wandb: train/loss 0.6457
<br>wandb: train/total_flos 3.4382652340646707e+18
<br>wandb: train/train_loss 0.70204
<br>wandb: train/train_runtime 10880.917
<br>wandb: train/train_samples_per_second 9.417
<br>wandb: train/train_steps_per_second 0.038
<br>wandb:
|
TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-AWQ
|
TheBloke
| 2023-12-19T16:46:54Z | 114 | 4 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored",
"base_model:quantized:w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2023-12-19T16:20:38Z |
---
base_model: w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
inference: false
license: apache-2.0
model_creator: Stepan Zuev
model_name: Solar 10.7B Instruct V1.0 Uncensored
model_type: solar
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Solar 10.7B Instruct V1.0 Uncensored - AWQ
- Model creator: [Stepan Zuev](https://huggingface.co/w4r10ck)
- Original model: [Solar 10.7B Instruct V1.0 Uncensored](https://huggingface.co/w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored)
<!-- description start -->
## Description
This repo contains AWQ model files for [Stepan Zuev's Solar 10.7B Instruct V1.0 Uncensored](https://huggingface.co/w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-GGUF)
* [Stepan Zuev's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 5.96 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `SOLAR-10.7B-Instruct-v1.0-uncensored-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# Not an f-string: {prompt} is a .format() placeholder, filled in below.
prompt_template='''{prompt}
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Stepan Zuev's Solar 10.7B Instruct V1.0 Uncensored
[upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) finetuned on [unalignment/toxic-dpo-v0.1](https://huggingface.co/datasets/unalignment/toxic-dpo-v0.1)
|
ntc-ai/SDXL-LoRA-slider.distinct-in-focus
|
ntc-ai
| 2023-12-19T16:36:17Z | 37 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2023-12-19T16:36:14Z |
---
language:
- en
thumbnail: "images/evaluate/distinct, in-focus...blurry/distinct, in-focus_17_3.0.png"
widget:
- text: distinct, in-focus
output:
url: images/distinct, in-focus_17_3.0.png
- text: distinct, in-focus
output:
url: images/distinct, in-focus_19_3.0.png
- text: distinct, in-focus
output:
url: images/distinct, in-focus_20_3.0.png
- text: distinct, in-focus
output:
url: images/distinct, in-focus_21_3.0.png
- text: distinct, in-focus
output:
url: images/distinct, in-focus_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "distinct, in-focus"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - distinct, in-focus (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/distinct, in-focus_17_-3.0.png" width=256 height=256 /> | <img src="images/distinct, in-focus_17_0.0.png" width=256 height=256 /> | <img src="images/distinct, in-focus_17_3.0.png" width=256 height=256 /> |
| <img src="images/distinct, in-focus_19_-3.0.png" width=256 height=256 /> | <img src="images/distinct, in-focus_19_0.0.png" width=256 height=256 /> | <img src="images/distinct, in-focus_19_3.0.png" width=256 height=256 /> |
| <img src="images/distinct, in-focus_20_-3.0.png" width=256 height=256 /> | <img src="images/distinct, in-focus_20_0.0.png" width=256 height=256 /> | <img src="images/distinct, in-focus_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
distinct, in-focus
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.distinct-in-focus', weight_name='distinct, in-focus.safetensors', adapter_name="distinct, in-focus")
# Activate the LoRA
pipe.set_adapters(["distinct, in-focus"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, distinct, in-focus"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
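The `adapter_weights` value plays the role of the slider strength shown in the table above: positive values push generations toward "distinct, in-focus", while negative values push toward the opposite end of the slider (blurry).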
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 480+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
bdsqlsz/dpo-sd-text2image-v1-fp16
|
bdsqlsz
| 2023-12-19T16:33:17Z | 0 | 24 | null |
[
"dataset:yuvalkirstain/pickapic_v2",
"arxiv:2311.12908",
"region:us"
] | null | 2023-12-19T15:33:00Z |
---
datasets:
- yuvalkirstain/pickapic_v2
---
# Diffusion Model Alignment Using Direct Preference Optimization
Direct Preference Optimization (DPO) for text-to-image diffusion models is a method to align diffusion models to human preferences by directly optimizing on human comparison data. See the paper: [Diffusion Model Alignment Using Direct Preference Optimization](https://arxiv.org/abs/2311.12908).
The SD1.5 model is fine-tuned from [stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) on the offline human preference data [pickapic_v2](https://huggingface.co/datasets/yuvalkirstain/pickapic_v2).
The SDXL model is fine-tuned from [stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) on the same offline human preference data [pickapic_v2](https://huggingface.co/datasets/yuvalkirstain/pickapic_v2).
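The card does not include usage code, so here is a minimal diffusers sketch for the SDXL variant. It assumes the repository exposes a diffusers-compatible UNet in a `unet` subfolder — adjust the paths to the actual file layout (the repo may ship single `.safetensors` files instead).
```python
# Minimal sketch: swap the DPO-tuned UNet into an SDXL pipeline.
# Assumption: this repo exposes a diffusers-compatible UNet under "unet";
# adjust to the actual file layout before running.
import torch
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "bdsqlsz/dpo-sd-text2image-v1-fp16", subfolder="unet", torch_dtype=torch.float16
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of an astronaut riding a horse", guidance_scale=7.5).images[0]
image.save("dpo_sample.png")
```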
|
distilbert/distilbert-base-uncased-finetuned-sst-2-english
|
distilbert
| 2023-12-19T16:29:37Z | 6,457,837 | 683 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"rust",
"onnx",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:sst2",
"dataset:glue",
"arxiv:1910.01108",
"doi:10.57967/hf/0181",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: en
license: apache-2.0
datasets:
- sst2
- glue
model-index:
- name: distilbert-base-uncased-finetuned-sst-2-english
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: sst2
split: validation
metrics:
- type: accuracy
value: 0.9105504587155964
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2YyOGMxYjY2Y2JhMjkxNjIzN2FmMjNiNmM2ZWViNGY3MTNmNWI2YzhiYjYxZTY0ZGUyN2M1NGIxZjRiMjQwZiIsInZlcnNpb24iOjF9.uui0srxV5ZHRhxbYN6082EZdwpnBgubPJ5R2-Wk8HTWqmxYE3QHidevR9LLAhidqGw6Ih93fK0goAXncld_gBg
- type: precision
value: 0.8978260869565218
name: Precision
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzgwYTYwYjA2MmM0ZTYwNDk0M2NmNTBkZmM2NGNhYzQ1OGEyN2NkNDQ3Mzc2NTQyMmZiNDJiNzBhNGVhZGUyOSIsInZlcnNpb24iOjF9.eHjLmw3K02OU69R2Au8eyuSqT3aBDHgZCn8jSzE3_urD6EUSSsLxUpiAYR4BGLD_U6-ZKcdxVo_A2rdXqvUJDA
- type: recall
value: 0.9301801801801802
name: Recall
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGIzM2E3MTI2Mzc2MDYwNmU3ZTVjYmZmZDBkNjY4ZTc5MGY0Y2FkNDU3NjY1MmVkNmE3Y2QzMzAwZDZhOWY1NiIsInZlcnNpb24iOjF9.PUZlqmct13-rJWBXdHm5tdkXgETL9F82GNbbSR4hI8MB-v39KrK59cqzFC2Ac7kJe_DtOeUyosj34O_mFt_1DQ
- type: auc
value: 0.9716626673402374
name: AUC
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDM0YWIwZmQ4YjUwOGZmMWU2MjI1YjIxZGQ2MzNjMzRmZmYxMzZkNGFjODhlMDcyZDM1Y2RkMWZlOWQ0MWYwNSIsInZlcnNpb24iOjF9.E7GRlAXmmpEkTHlXheVkuL1W4WNjv4JO3qY_WCVsTVKiO7bUu0UVjPIyQ6g-J1OxsfqZmW3Leli1wY8vPBNNCQ
- type: f1
value: 0.9137168141592922
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGU4MjNmOGYwZjZjMDQ1ZTkyZTA4YTc1MWYwOTM0NDM4ZWY1ZGVkNDY5MzNhYTQyZGFlNzIyZmUwMDg3NDU0NyIsInZlcnNpb24iOjF9.mW5ftkq50Se58M-jm6a2Pu93QeKa3MfV7xcBwvG3PSB_KNJxZWTCpfMQp-Cmx_EMlmI2siKOyd8akYjJUrzJCA
- type: loss
value: 0.39013850688934326
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTZiNzAyZDc0MzUzMmE1MGJiN2JlYzFiODE5ZTNlNGE4MmI4YzRiMTc2ODEzMTUwZmEzOTgxNzc4YjJjZTRmNiIsInZlcnNpb24iOjF9.VqIC7uYC-ZZ8ss9zQOlRV39YVOOLc5R36sIzCcVz8lolh61ux_5djm2XjpP6ARc6KqEnXC4ZtfNXsX2HZfrtCQ
- task:
type: text-classification
name: Text Classification
dataset:
name: sst2
type: sst2
config: default
split: train
metrics:
- type: accuracy
value: 0.9885521685548412
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2I3NzU3YzhmMDkxZTViY2M3OTY1NmI0ZTdmMDQxNjNjYzJiZmQxNzczM2E4YmExYTY5ODY0NDBkY2I4ZjNkOCIsInZlcnNpb24iOjF9.4Gtk3FeVc9sPWSqZIaeUXJ9oVlPzm-NmujnWpK2y5s1Vhp1l6Y1pK5_78wW0-NxSvQqV6qd5KQf_OAEpVAkQDA
- type: precision
value: 0.9881965062029833
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDdlZDMzY2I3MTAwYTljNmM4MGMyMzU2YjAzZDg1NDYwN2ZmM2Y5OWZhMjUyMGJiNjY1YmZiMzFhMDI2ODFhNyIsInZlcnNpb24iOjF9.cqmv6yBxu4St2mykRWrZ07tDsiSLdtLTz2hbqQ7Gm1rMzq9tdlkZ8MyJRxtME_Y8UaOG9rs68pV-gKVUs8wABw
- type: precision
value: 0.9885521685548412
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjFlYzAzNmE1YjljNjUwNzBjZjEzZDY0ZDQyMmY5ZWM2OTBhNzNjYjYzYTk1YWE1NjU3YTMxZDQwOTE1Y2FkNyIsInZlcnNpb24iOjF9.jnCHOkUHuAOZZ_ZMVOnetx__OVJCS6LOno4caWECAmfrUaIPnPNV9iJ6izRO3sqkHRmxYpWBb-27GJ4N3LU-BQ
- type: precision
value: 0.9885639626373408
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGUyODFjNjBlNTE2MTY3ZDAxOGU1N2U0YjUyY2NiZjhkOGVmYThjYjBkNGU3NTRkYzkzNDQ2MmMwMjkwMWNiMyIsInZlcnNpb24iOjF9.zTNabMwApiZyXdr76QUn7WgGB7D7lP-iqS3bn35piqVTNsv3wnKjZOaKFVLIUvtBXq4gKw7N2oWxvWc4OcSNDg
- type: recall
value: 0.9886145346602994
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTU1YjlhODU3YTkyNTdiZDcwZGFlZDBiYjY0N2NjMGM2NTRiNjQ3MDNjNGMxOWY2ZGQ4NWU1YmMzY2UwZTI3YSIsInZlcnNpb24iOjF9.xaLPY7U-wHsJ3DDui1yyyM-xWjL0Jz5puRThy7fczal9x05eKEQ9s0a_WD-iLmapvJs0caXpV70hDe2NLcs-DA
- type: recall
value: 0.9885521685548412
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODE0YTU0MDBlOGY4YzU0MjY5MzA3OTk2OGNhOGVkMmU5OGRjZmFiZWI2ZjY5ODEzZTQzMTI0N2NiOTVkNDliYiIsInZlcnNpb24iOjF9.SOt1baTBbuZRrsvGcak2sUwoTrQzmNCbyV2m1_yjGsU48SBH0NcKXicidNBSnJ6ihM5jf_Lv_B5_eOBkLfNWDQ
- type: recall
value: 0.9885521685548412
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWNkNmM0ZGRlNmYxYzIwNDk4OTI5MzIwZWU1NzZjZDVhMDcyNDFlMjBhNDQxODU5OWMwMWNhNGEzNjY3ZGUyOSIsInZlcnNpb24iOjF9.b15Fh70GwtlG3cSqPW-8VEZT2oy0CtgvgEOtWiYonOovjkIQ4RSLFVzVG-YfslaIyfg9RzMWzjhLnMY7Bpn2Aw
- type: f1
value: 0.9884019815052447
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmM4NjQ5Yjk5ODRhYTU1MTY3MmRhZDBmODM1NTg3OTFiNWM4NDRmYjI0MzZkNmQ1MzE3MzcxODZlYzBkYTMyYSIsInZlcnNpb24iOjF9.74RaDK8nBVuGRl2Se_-hwQvP6c4lvVxGHpcCWB4uZUCf2_HoC9NT9u7P3pMJfH_tK2cpV7U3VWGgSDhQDi-UBQ
- type: f1
value: 0.9885521685548412
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDRmYWRmMmQ0YjViZmQxMzhhYTUyOTE1MTc0ZDU1ZjQyZjFhMDYzYzMzZDE0NzZlYzQyOTBhMTBhNmM5NTlkMiIsInZlcnNpb24iOjF9.VMn_psdAHIZTlW6GbjERZDe8MHhwzJ0rbjV_VJyuMrsdOh5QDmko-wEvaBWNEdT0cEKsbggm-6jd3Gh81PfHAQ
- type: f1
value: 0.9885546181087554
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjUyZWFhZDZhMGQ3MzBmYmRiNDVmN2FkZDBjMjk3ODk0OTAxNGZkMWE0NzU5ZjI0NzE0NGZiNzM0N2Y2NDYyOSIsInZlcnNpb24iOjF9.YsXBhnzEEFEW6jw3mQlFUuIrW7Gabad2Ils-iunYJr-myg0heF8NEnEWABKFE1SnvCWt-69jkLza6SupeyLVCA
- type: loss
value: 0.040652573108673096
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTc3YjU3MjdjMzkxODA5MjU5NGUyY2NkMGVhZDg3ZWEzMmU1YWVjMmI0NmU2OWEyZTkzMTVjNDZiYTc0YjIyNCIsInZlcnNpb24iOjF9.lA90qXZVYiILHMFlr6t6H81Oe8a-4KmeX-vyCC1BDia2ofudegv6Vb46-4RzmbtuKeV6yy6YNNXxXxqVak1pAg
---
# DistilBERT base uncased finetuned SST-2
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
## Model Details
**Model Description:** This model is a fine-tune checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned on SST-2.
This model reaches an accuracy of 91.3 on the dev set (for comparison, the BERT `bert-base-uncased` version reaches an accuracy of 92.7).
- **Developed by:** Hugging Face
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** For more details about DistilBERT, we encourage users to check out [this model card](https://huggingface.co/distilbert-base-uncased).
- **Resources for more information:**
- [Model Documentation](https://huggingface.co/docs/transformers/main/en/model_doc/distilbert#transformers.DistilBertForSequenceClassification)
- [DistilBERT paper](https://arxiv.org/abs/1910.01108)
## How to Get Started With the Model
Example of single-label classification:
```python
import torch
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
```
## Uses
#### Direct Use
This model can be used for single-label sentiment classification of English text. Because it is already fine-tuned on SST-2, it can be used as-is; see the model hub to look for versions fine-tuned on other tasks that interest you.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
Based on a few experiments, we observed that this model could produce biased predictions that target underrepresented populations.
For instance, for sentences like `This film was filmed in COUNTRY`, this binary classification model will give radically different probabilities for the positive label depending on the country (0.89 if the country is France, but 0.08 if the country is Afghanistan) when nothing in the input indicates such a strong semantic shift. In this [colab](https://colab.research.google.com/gist/ageron/fb2f64fb145b4bc7c49efc97e5f114d3/biasmap.ipynb), [Aurélien Géron](https://twitter.com/aureliengeron) made an interesting map plotting these probabilities for each country.
<img src="https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/resolve/main/map.jpeg" alt="Map of positive probabilities per country." width="500"/>
We strongly advise users to thoroughly probe these aspects on their use-cases in order to evaluate the risks of this model. We recommend looking at the following bias evaluation datasets as a place to start: [WinoBias](https://huggingface.co/datasets/wino_bias), [WinoGender](https://huggingface.co/datasets/super_glue), [Stereoset](https://huggingface.co/datasets/stereoset).
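To make the probe above concrete, here is a minimal sketch that reproduces the country comparison; the exact probabilities may differ slightly from those quoted.
```python
# Minimal sketch of the bias probe described above: compare the positive-label
# probability for the same template with different country names.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

for country in ["France", "Afghanistan"]:
    inputs = tokenizer(f"This film was filmed in {country}", return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)
    print(country, f"P(positive) = {probs[0, 1]:.2f}")  # index 1 = POSITIVE
```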
# Training
#### Training Data
The authors fine-tuned the model on the Stanford Sentiment Treebank ([sst2](https://huggingface.co/datasets/sst2)) corpus.
#### Training Procedure
###### Fine-tuning hyper-parameters
- learning_rate = 1e-5
- batch_size = 32
- warmup = 600
- max_seq_length = 128
- num_train_epochs = 3.0
|
paulrouge/test-lora-3
|
paulrouge
| 2023-12-19T16:17:46Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:TheBloke/MythoMax-L2-13B-GPTQ",
"base_model:adapter:TheBloke/MythoMax-L2-13B-GPTQ",
"region:us"
] | null | 2023-12-19T16:13:44Z |
---
library_name: peft
base_model: TheBloke/MythoMax-L2-13B-GPTQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
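Until the card is filled in, the following hypothetical sketch shows one plausible way to load the adapter, assuming this repo contains a PEFT LoRA adapter for the GPTQ base model listed in the metadata (GPTQ loading requires the auto-gptq/optimum integration in Transformers):
```python
# Hypothetical sketch — assumes this repo holds a PEFT LoRA adapter trained on
# the GPTQ base model named in the card metadata.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "TheBloke/MythoMax-L2-13B-GPTQ", device_map="auto"
)
model = PeftModel.from_pretrained(base, "paulrouge/test-lora-3")
tokenizer = AutoTokenizer.from_pretrained("TheBloke/MythoMax-L2-13B-GPTQ")
```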
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
Jackman4399/ppo-Huggy
|
Jackman4399
| 2023-12-19T16:13:42Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-12-19T16:13:33Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Jackman4399/ppo-Huggy
3. Select your *.nn/*.onnx file
4. Click on Watch the agent play 👀
|
hkivancoral/smids_10x_deit_small_sgd_0001_fold3
|
hkivancoral
| 2023-12-19T16:10:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-19T15:14:49Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_small_sgd_0001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.855
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_small_sgd_0001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3901
- Accuracy: 0.855
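The card ships without usage code; a minimal inference sketch follows, assuming the checkpoint's labels match the imagefolder class names used during training and that `example.png` is an image of the kind the model was trained on.
```python
# Minimal usage sketch for this fine-tuned DeiT classifier.
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="hkivancoral/smids_10x_deit_small_sgd_0001_fold3",
)
print(clf("example.png"))  # path or URL to an input image
```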
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9824 | 1.0 | 750 | 1.0331 | 0.435 |
| 0.9063 | 2.0 | 1500 | 0.9735 | 0.5233 |
| 0.8503 | 3.0 | 2250 | 0.9109 | 0.59 |
| 0.7679 | 4.0 | 3000 | 0.8466 | 0.645 |
| 0.7248 | 5.0 | 3750 | 0.7860 | 0.69 |
| 0.6585 | 6.0 | 4500 | 0.7311 | 0.7167 |
| 0.6129 | 7.0 | 5250 | 0.6856 | 0.7283 |
| 0.6082 | 8.0 | 6000 | 0.6417 | 0.7617 |
| 0.581 | 9.0 | 6750 | 0.6068 | 0.7683 |
| 0.5231 | 10.0 | 7500 | 0.5777 | 0.7767 |
| 0.5113 | 11.0 | 8250 | 0.5554 | 0.7833 |
| 0.4834 | 12.0 | 9000 | 0.5347 | 0.8 |
| 0.5002 | 13.0 | 9750 | 0.5194 | 0.8067 |
| 0.5244 | 14.0 | 10500 | 0.5049 | 0.8117 |
| 0.478 | 15.0 | 11250 | 0.4926 | 0.8183 |
| 0.4573 | 16.0 | 12000 | 0.4823 | 0.8183 |
| 0.4332 | 17.0 | 12750 | 0.4737 | 0.8233 |
| 0.4552 | 18.0 | 13500 | 0.4642 | 0.8283 |
| 0.4717 | 19.0 | 14250 | 0.4573 | 0.8283 |
| 0.4284 | 20.0 | 15000 | 0.4511 | 0.8283 |
| 0.418 | 21.0 | 15750 | 0.4442 | 0.835 |
| 0.4355 | 22.0 | 16500 | 0.4394 | 0.8417 |
| 0.442 | 23.0 | 17250 | 0.4349 | 0.84 |
| 0.4592 | 24.0 | 18000 | 0.4307 | 0.845 |
| 0.4174 | 25.0 | 18750 | 0.4266 | 0.8483 |
| 0.4133 | 26.0 | 19500 | 0.4227 | 0.8483 |
| 0.3538 | 27.0 | 20250 | 0.4190 | 0.8517 |
| 0.4061 | 28.0 | 21000 | 0.4159 | 0.8533 |
| 0.4077 | 29.0 | 21750 | 0.4132 | 0.8517 |
| 0.4051 | 30.0 | 22500 | 0.4109 | 0.8533 |
| 0.3404 | 31.0 | 23250 | 0.4086 | 0.8517 |
| 0.353 | 32.0 | 24000 | 0.4061 | 0.855 |
| 0.3864 | 33.0 | 24750 | 0.4039 | 0.8567 |
| 0.3572 | 34.0 | 25500 | 0.4020 | 0.8567 |
| 0.3431 | 35.0 | 26250 | 0.4002 | 0.8567 |
| 0.3693 | 36.0 | 27000 | 0.3992 | 0.8567 |
| 0.3706 | 37.0 | 27750 | 0.3978 | 0.8567 |
| 0.423 | 38.0 | 28500 | 0.3964 | 0.855 |
| 0.3909 | 39.0 | 29250 | 0.3953 | 0.855 |
| 0.41 | 40.0 | 30000 | 0.3943 | 0.855 |
| 0.3387 | 41.0 | 30750 | 0.3933 | 0.855 |
| 0.3698 | 42.0 | 31500 | 0.3927 | 0.855 |
| 0.3644 | 43.0 | 32250 | 0.3919 | 0.855 |
| 0.3722 | 44.0 | 33000 | 0.3914 | 0.8567 |
| 0.3269 | 45.0 | 33750 | 0.3910 | 0.8567 |
| 0.3532 | 46.0 | 34500 | 0.3906 | 0.8567 |
| 0.3899 | 47.0 | 35250 | 0.3904 | 0.8567 |
| 0.3783 | 48.0 | 36000 | 0.3902 | 0.855 |
| 0.3767 | 49.0 | 36750 | 0.3901 | 0.855 |
| 0.3232 | 50.0 | 37500 | 0.3901 | 0.855 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_SystemError0.8_Seed104
|
behzadnet
| 2023-12-19T15:55:50Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-19T15:55:45Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
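A minimal loading sketch that reproduces the quantization config listed above is given below; it assumes the adapter in this repo loads on top of the base model via PEFT.
```python
# Minimal sketch: load the base model with the 4-bit config listed above,
# then attach this repo's adapter (assumption: standard PEFT layout).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "Trelis/Llama-2-7b-chat-hf-sharded-bf16",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    base,
    "behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_SystemError0.8_Seed104",
)
```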
|
florianehmann/xlm-roberta-base-finetuned-panx-all
|
florianehmann
| 2023-12-19T15:55:20Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-19T15:50:18Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X (WikiANN) dataset, combining all of its language subsets.
It achieves the following results on the evaluation set:
- Loss: 0.2026
- F1: 0.8552
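A minimal usage sketch (not part of the auto-generated card): it assumes the checkpoint is a PAN-X/WikiANN-style NER tagger whose label names come from the model config.
```python
# Minimal NER sketch; aggregation merges word pieces into whole entities.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="florianehmann/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte Paris im Mai."))
```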
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3091 | 1.0 | 2503 | 0.2188 | 0.8071 |
| 0.1764 | 2.0 | 5006 | 0.1946 | 0.8450 |
| 0.1127 | 3.0 | 7509 | 0.2026 | 0.8552 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.2
- Datasets 2.15.0
- Tokenizers 0.15.0
|
dataautogpt3/dpo-sdxl-merged
|
dataautogpt3
| 2023-12-19T15:51:34Z | 0 | 13 | null |
[
"license:mit",
"region:us"
] | null | 2023-12-19T15:36:54Z |
---
license: mit
---
Juggernaut 7XL and DPO merged and converted to the SafeTensors format.
This repository contains models that have been converted to the SafeTensors format by merging the "Juggernaut 7XL" and "DPO" models. The repository includes the following models:
- UNet: A model converted from DPO to SafeTensors format.
- VAE (Variational Autoencoder): A model converted from Juggernaut 7XL to SafeTensors format.
- CLIP: A model converted from Juggernaut 7XL to SafeTensors format.
|
emilianovilas/portraits
|
emilianovilas
| 2023-12-19T15:50:20Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:unknown",
"region:us"
] |
text-to-image
| 2023-12-19T15:49:35Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "UNICODE\0\0(\0b\0e\0s\0t\0_\0q\0u\0a\0l\0i\0t\0y\0)\0,\0(\0u\0l\0t\0r\0a\0_\0d\0e\0t\0a\0i\0l\0e\0d\0)\0,\0 \0p\0h\0o\0t\0o\0 \0o\0f\0 \0b\0e\0a\0u\0t\0i\0f\0u\0l\0 \0a\0g\0e\0 \01\08\0 \0g\0i\0r\0l\0,\0 \0p\0a\0s\0t\0e\0l\0 \0h\0a\0i\0r\0,\0 \0f\0r\0e\0c\0k\0l\0e\0s\0 \0s\0e\0x\0y\0,\0 \0b\0e\0a\0u\0t\0i\0f\0u\0l\0,\0 \0c\0l\0o\0s\0e\0 \0u\0p\0,\0 \0y\0o\0u\0n\0g\0,\0 \0d\0s\0l\0r\0,\0 \08\0k\0,\0 \04\0k\0,\0 \0u\0l\0t\0r\0a\0r\0e\0a\0l\0i\0s\0t\0i\0c\0,\0 \0r\0e\0a\0l\0i\0s\0t\0i\0c\0,\0 \0n\0a\0t\0u\0r\0a\0l\0 \0s\0k\0i\0n\0,\0 \0t\0e\0x\0t\0u\0r\0e\0d\0 \0s\0k\0i\0n\0,\0 \0<\0l\0o\0r\0a\0:\0p\0o\0r\0t\0r\0a\0i\0t\0:\01\0>\0"
output:
url: images/16.jpeg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
license: unknown
---
# portraits
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/emilianovilas/portraits/tree/main) them in the Files & versions tab.
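A minimal diffusers sketch follows, assuming the LoRA weight file is named `portraits.safetensors` — check the Files & versions tab for the actual filename.
```python
# Minimal sketch: apply this LoRA on top of the SD 1.5 base model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Filename is an assumption; replace with the actual weight file in this repo.
pipe.load_lora_weights("emilianovilas/portraits", weight_name="portraits.safetensors")

image = pipe("photo of a person, portrait, dslr, natural skin").images[0]
image.save("portrait.png")
```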
|
Divyanshu97/donut-bs-level-v2
|
Divyanshu97
| 2023-12-19T15:49:11Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-12-19T04:38:38Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-bs-level-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-bs-level-v2
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
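The card has no usage example yet; the sketch below follows the standard Donut inference pattern. The task prompt token is hypothetical — check the checkpoint's tokenizer for the special token actually used during fine-tuning.
```python
# Minimal Donut inference sketch (task prompt token is an assumption).
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "Divyanshu97/donut-bs-level-v2"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s_bs-level>"  # hypothetical; inspect the tokenizer's special tokens
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values, decoder_input_ids=decoder_input_ids, max_length=512
)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```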
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
jiajunzhu/comp550
|
jiajunzhu
| 2023-12-19T15:41:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-19T15:22:47Z |
---
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** [decoder-only model]
- **Finetuned from model [optional]:** [meta-llama/Llama-2-7b]
## Bias, Risks, and Limitations
The data will be deleted because it uses Llama 2 weights; it is provided only so the TA and professor can grade properly.
[More Information Needed]
## Training Details
### Training Data
c-lang8
|
hkivancoral/smids_5x_deit_small_sgd_0001_fold4
|
hkivancoral
| 2023-12-19T15:32:33Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-19T13:53:25Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_5x_deit_small_sgd_0001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8233333333333334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_small_sgd_0001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4830
- Accuracy: 0.8233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0422 | 1.0 | 375 | 1.0355 | 0.445 |
| 0.9877 | 2.0 | 750 | 0.9987 | 0.5017 |
| 0.9301 | 3.0 | 1125 | 0.9591 | 0.5367 |
| 0.9069 | 4.0 | 1500 | 0.9204 | 0.5917 |
| 0.8815 | 5.0 | 1875 | 0.8838 | 0.6217 |
| 0.8208 | 6.0 | 2250 | 0.8478 | 0.6383 |
| 0.7819 | 7.0 | 2625 | 0.8141 | 0.6817 |
| 0.7955 | 8.0 | 3000 | 0.7823 | 0.7033 |
| 0.7492 | 9.0 | 3375 | 0.7528 | 0.7233 |
| 0.7403 | 10.0 | 3750 | 0.7259 | 0.7317 |
| 0.7047 | 11.0 | 4125 | 0.7009 | 0.745 |
| 0.6669 | 12.0 | 4500 | 0.6790 | 0.76 |
| 0.6557 | 13.0 | 4875 | 0.6594 | 0.7667 |
| 0.6563 | 14.0 | 5250 | 0.6418 | 0.77 |
| 0.5999 | 15.0 | 5625 | 0.6263 | 0.7667 |
| 0.589 | 16.0 | 6000 | 0.6125 | 0.77 |
| 0.5618 | 17.0 | 6375 | 0.5999 | 0.7767 |
| 0.5666 | 18.0 | 6750 | 0.5885 | 0.7817 |
| 0.6067 | 19.0 | 7125 | 0.5784 | 0.7867 |
| 0.5796 | 20.0 | 7500 | 0.5694 | 0.79 |
| 0.547 | 21.0 | 7875 | 0.5612 | 0.7883 |
| 0.5698 | 22.0 | 8250 | 0.5540 | 0.7867 |
| 0.5377 | 23.0 | 8625 | 0.5473 | 0.7917 |
| 0.5508 | 24.0 | 9000 | 0.5411 | 0.7967 |
| 0.5752 | 25.0 | 9375 | 0.5355 | 0.7983 |
| 0.5019 | 26.0 | 9750 | 0.5303 | 0.8 |
| 0.5146 | 27.0 | 10125 | 0.5255 | 0.8017 |
| 0.5114 | 28.0 | 10500 | 0.5210 | 0.8033 |
| 0.4588 | 29.0 | 10875 | 0.5170 | 0.8033 |
| 0.5045 | 30.0 | 11250 | 0.5133 | 0.805 |
| 0.5118 | 31.0 | 11625 | 0.5098 | 0.805 |
| 0.4619 | 32.0 | 12000 | 0.5067 | 0.8083 |
| 0.4796 | 33.0 | 12375 | 0.5037 | 0.81 |
| 0.5217 | 34.0 | 12750 | 0.5011 | 0.81 |
| 0.4423 | 35.0 | 13125 | 0.4986 | 0.8133 |
| 0.4692 | 36.0 | 13500 | 0.4964 | 0.815 |
| 0.4889 | 37.0 | 13875 | 0.4944 | 0.815 |
| 0.487 | 38.0 | 14250 | 0.4925 | 0.82 |
| 0.5206 | 39.0 | 14625 | 0.4909 | 0.82 |
| 0.4988 | 40.0 | 15000 | 0.4894 | 0.82 |
| 0.4485 | 41.0 | 15375 | 0.4881 | 0.8217 |
| 0.4284 | 42.0 | 15750 | 0.4870 | 0.8217 |
| 0.4979 | 43.0 | 16125 | 0.4860 | 0.8217 |
| 0.454 | 44.0 | 16500 | 0.4851 | 0.8217 |
| 0.4865 | 45.0 | 16875 | 0.4845 | 0.8217 |
| 0.4847 | 46.0 | 17250 | 0.4839 | 0.8217 |
| 0.5681 | 47.0 | 17625 | 0.4835 | 0.8217 |
| 0.4795 | 48.0 | 18000 | 0.4832 | 0.8217 |
| 0.4757 | 49.0 | 18375 | 0.4831 | 0.8233 |
| 0.4471 | 50.0 | 18750 | 0.4830 | 0.8233 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
tsunemoto/WizardMath-7B-V1.1-GGUF
|
tsunemoto
| 2023-12-19T15:26:43Z | 29 | 2 | null |
[
"gguf",
"GGUF",
"en",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"endpoints_compatible",
"region:us"
] | null | 2023-12-19T15:18:06Z |
---
title: "WizardMath-7B-V1.1 Quantized in GGUF"
tags:
- GGUF
language: en
---

# Tsunemoto GGUF's of WizardMath-7B-V1.1
This is a GGUF quantization of WizardMath-7B-V1.1.
## Original Repo Link:
[Original Repository](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
## Original Model Card:
---
## WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct (RLEIF)
<p style="font-size:28px;" align="center">
🏠 <a href="https://wizardlm.github.io/" target="_blank">Home Page</a> </p>
<p align="center">
<p align="center">
🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> </p>
<p align="center">
📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News
[12/19/2023] 🔥 We released **WizardMath-7B-V1.1** trained from Mistral-7B, the **SOTA 7B math LLM**, achieves **83.2 pass@1** on GSM8k, and **33.0 pass@1** on MATH.
[12/19/2023] 🔥 **WizardMath-7B-V1.1** outperforms **ChatGPT 3.5**, **Gemini Pro**, **Mixtral MOE**, and **Claude Instant** on GSM8K pass@1.
[12/19/2023] 🔥 **WizardMath-7B-V1.1** is comparable with **ChatGPT 3.5**, **Gemini Pro**, and surpasses **Mixtral MOE** on MATH pass@1.
| Model | Checkpoint | Paper | GSM8k | MATH |
| ----- |------| ---- |------|-------|
| **WizardMath-7B-V1.1** | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.1" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **83.2** | **33.0** |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** |
## [12/19/2023] Comparing WizardMath-7B-V1.1 with other open source 7B size math LLMs.
| Model | GSM8k Pass@1 | MATH Pass@1 |
| ----- |------| ---- |
| MPT-7B | 6.8 | 3.0 |
|Llama 1-7B | 11.0 | 2.9 |
|Llama 2-7B|12.3 |2.8 |
|Yi-6b| 32.6 |5.8 |
|Mistral-7B|37.8 |9.1 |
|Qwen-7b|47.8 |9.3 |
| RFT-7B | 50.3 | -- |
| MAmmoTH-7B (COT) | 50.5 | 10.4 |
| WizardMath-7B-V1.0 | 54.9 | 10.7 |
|Abel-7B-001 |59.7 |13 |
| MetaMath-7B | 66.5 | 19.8 |
| Arithmo-Mistral-7B | 74.7 | 25.3 |
|MetaMath-Mistral-7B|77.7 |28.2 |
|Abel-7B-002 | 80.4 | 29.5 |
| **WizardMath-7B-V1.1** | **83.2** | **33.0** |
## [12/19/2023] Comparing WizardMath-7B-V1.1 with large open source (30B~70B) LLMs.
| Model | GSM8k Pass@1 | MATH Pass@1 |
| ----- |------| ---- |
| Llemma-34B | 51.5 | 25.0 |
| Minerva-62B | 52.4 | 27.6 |
| Llama 2-70B | 56.8 | 13.5 |
| DeepSeek 67B | 63.4 | -- |
| Grok 33B | 62.9 | 23.9 |
| MAmmoTH-70B | 72.4 | 21.1 |
| Yi-34B | 67.9 | 15.9 |
| Mixtral 8x7B | 74.4 | 28.4 |
| MetaMath-70B | 82.3 | 26.6 |
| **WizardMath-7B-V1.1** | **83.2** | **33.0** |
🔥
❗<b>Note on system prompt usage:</b>
Please use **exactly the same system prompts** as we do; we do not guarantee the accuracy of the **quantized versions**.
**Default version:**
```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
```
**CoT Version:** (❗For **simple** math questions, we do NOT recommend using the CoT prompt.)
```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
```
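For local inference with the GGUF files, a minimal llama-cpp-python sketch follows; the filename is an assumption — substitute whichever quant you downloaded from this repo.
```python
# Minimal sketch with llama-cpp-python using the default prompt format above.
# The GGUF filename is an assumption; substitute the quant you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="wizardmath-7b-v1.1.Q4_K_M.gguf", n_ctx=2048)
instruction = "What is 12 * 17?"
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)
out = llm(prompt, max_tokens=256, stop=["</s>"])
print(out["choices"][0]["text"])
```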
## Inference WizardMath Demo Script
We provide the WizardMath inference demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo).
## Citation
Please cite the repo if you use the data, method or code in this repo.
```
@article{luo2023wizardmath,
title={WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct},
author={Luo, Haipeng and Sun, Qingfeng and Xu, Can and Zhao, Pu and Lou, Jianguang and Tao, Chongyang and Geng, Xiubo and Lin, Qingwei and Chen, Shifeng and Zhang, Dongmei},
journal={arXiv preprint arXiv:2308.09583},
year={2023}
}
```
|
openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf
|
openthaigpt
| 2023-12-19T15:26:24Z | 1,615 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"openthaigpt",
"th",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-18T13:10:35Z |
---
license: apache-2.0
language:
- th
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- openthaigpt
- llama
---
# 🇹🇭 OpenThaiGPT 13b 1.0.0-beta Chat with 16-bit weights in Hugging Face format.
<img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2Fb8eiMDaqiEQL6ahbAY0h%2Fimage.png?alt=media&token=6fce78fd-2cca-4c0a-9648-bd5518e644ce" alt="OpenThaiGPT (https://openthaigpt.aieat.or.th/)" width="200px">
🇹🇭 OpenThaiGPT 13b Version 1.0.0-beta is a Thai-language, 13B-parameter LLaMA v2 Chat model, fine-tuned to follow Thai-translated instructions, with more than 10,000 of the most common Thai words added to the LLM's vocabulary to speed up generation.
## Licenses
**Source Code**: License Apache Software License 2.0.<br>
**Weight**: Research and **Commercial uses**.<br>
## Codes and Weight
**Finetune Code**: https://github.com/OpenThaiGPT/openthaigpt-finetune-010beta<br>
**Inference Code**: https://github.com/OpenThaiGPT/openthaigpt<br>
**Weight (Huggingface Checkpoint)**: https://huggingface.co/openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf
## Sponsors
<img src="https://cdn-uploads.huggingface.co/production/uploads/5fcd9c426d942eaf4d1ebd30/42d-GioSs4evIdNuMAaPB.png" width="600px">
## Support
- Official website: https://openthaigpt.aieat.or.th
- Facebook page: https://web.facebook.com/groups/openthaigpt
- A Discord server for discussion and support [here](https://discord.gg/rUTp6dfVUF)
- E-mail: [email protected]
## Description
The prompt format follows Llama 2:
```
<s>[INST] <<SYS>>
system_prompt
<</SYS>>
question [/INST]
```
System prompt:
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
## How to use
1. Install vLLM (https://github.com/vllm-project/vllm)
2. Start the API server: `python -m vllm.entrypoints.api_server --model /path/to/model --tensor-parallel-size num_gpus`
3. Run inference (cURL example):
```
curl --request POST \
--url http://localhost:8000/generate \
--header "Content-Type: application/json" \
    --data '{"prompt": "<s>[INST] <<SYS>>\nYou are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด\n<</SYS>>\n\nอยากลดความอ้วนต้องทำอย่างไร [/INST]","use_beam_search": false, "temperature": 0.1, "max_tokens": 512, "top_p": 0.75, "top_k": 40, "frequency_penalty": 0.3, "stop": "</s>"}'
```
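The same request from Python (a sketch that assumes the vLLM server above is already running on localhost):
```python
import requests

system_prompt = (
    "You are a question answering assistant. Answer the question as truthful and helpful as possible "
    "คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด"
)
prompt = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\nอยากลดความอ้วนต้องทำอย่างไร [/INST]"
response = requests.post("http://localhost:8000/generate", json={
    "prompt": prompt,
    "use_beam_search": False,
    "temperature": 0.1,
    "max_tokens": 512,
    "top_p": 0.75,
    "top_k": 40,
    "frequency_penalty": 0.3,
    "stop": "</s>",
})
print(response.json())
```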
### Authors
* Kobkrit Viriyayudhakorn ([email protected])
* Sumeth Yuenyong ([email protected])
* Thaweewat Rugsujarit ([email protected])
* Jillaphat Jaroenkantasima ([email protected])
* Norapat Buppodom ([email protected])
* Koravich Sangkaew ([email protected])
* Peerawat Rojratchadakorn ([email protected])
* Surapon Nonesung ([email protected])
* Chanon Utupon ([email protected])
* Sadhis Wongprayoon ([email protected])
* Nucharee Thongthungwong ([email protected])
* Chawakorn Phiantham ([email protected])
* Patteera Triamamornwooth ([email protected])
* Nattarika Juntarapaoraya ([email protected])
* Kriangkrai Saetan ([email protected])
* Pitikorn Khlaisamniang ([email protected])
<i>Disclaimer: Provided responses are not guaranteed.</i>
|
N7D7/lucia_LoRA_1200
|
N7D7
| 2023-12-19T15:19:25Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stablediffusionapi/juggernaut-xl-v7",
"base_model:adapter:stablediffusionapi/juggernaut-xl-v7",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-19T15:19:17Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stablediffusionapi/juggernaut-xl-v7
instance_prompt: a photo of TOK luciavarelaarroyo
license: openrail++
---
# SDXL LoRA DreamBooth - N7D7/lucia_LoRA_1200
<Gallery />
## Model description
These are N7D7/lucia_LoRA_1200 LoRA adaptation weights for stablediffusionapi/juggernaut-xl-v7.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK luciavarelaarroyo` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/N7D7/lucia_LoRA_1200/tree/main) them in the Files & versions tab.
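## Use with diffusers
A minimal loading sketch (illustrative only; this assumes the standard `diffusers` LoRA-loading API and a CUDA device):
```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stablediffusionapi/juggernaut-xl-v7", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("N7D7/lucia_LoRA_1200")  # this repo's adapter weights

image = pipe("a photo of TOK luciavarelaarroyo", num_inference_steps=25).images[0]
image.save("lucia.png")
```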
|
sdpkjc/Humanoid-v4-sac_continuous_action-seed5
|
sdpkjc
| 2023-12-19T15:19:20Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Humanoid-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T15:18:38Z |
---
tags:
- Humanoid-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v4
type: Humanoid-v4
metrics:
- type: mean_reward
value: 5742.10 +/- 32.39
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Humanoid-v4**
This is a trained model of a SAC agent playing Humanoid-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package and run the evaluation script with the following commands:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Humanoid-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Humanoid-v4-sac_continuous_action-seed5/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Humanoid-v4-sac_continuous_action-seed5/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Humanoid-v4-sac_continuous_action-seed5/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Humanoid-v4 --seed 5 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Humanoid-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 5,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
judithrosell/BC5CDR_BlueBERT_NER
|
judithrosell
| 2023-12-19T15:17:45Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12",
"base_model:finetune:bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12",
"license:cc0-1.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-19T15:02:24Z |
---
license: cc0-1.0
base_model: bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12
tags:
- generated_from_trainer
model-index:
- name: BC5CDR_BlueBERT_NER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BC5CDR_BlueBERT_NER
This model is a fine-tuned version of [bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12](https://huggingface.co/bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0944
- Seqeval classification report:

|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| Chemical     | 0.84      | 0.89   | 0.87     | 7079    |
| Disease      | 0.82      | 0.85   | 0.83     | 4968    |
| micro avg    | 0.83      | 0.87   | 0.85     | 12047   |
| macro avg    | 0.83      | 0.87   | 0.85     | 12047   |
| weighted avg | 0.83      | 0.87   | 0.85     | 12047   |
## Model description
More information needed
## Intended uses & limitations
More information needed
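As a sketch of the intended use (chemical/disease tagging; the repo id is taken from this card, the input sentence is a made-up example):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="judithrosell/BC5CDR_BlueBERT_NER",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Aspirin can cause gastrointestinal bleeding in some patients."))
```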
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Chemical (P / R / F1) | Disease (P / R / F1) | Micro avg (P / R / F1) | Macro avg (P / R / F1) |
|:-------------:|:-----:|:----:|:---------------:|:---------------------:|:--------------------:|:----------------------:|:----------------------:|
| No log        | 1.0   | 143  | 0.1111          | 0.82 / 0.86 / 0.84    | 0.76 / 0.83 / 0.80   | 0.79 / 0.85 / 0.82     | 0.79 / 0.85 / 0.82     |
| No log        | 2.0   | 286  | 0.0987          | 0.83 / 0.89 / 0.86    | 0.78 / 0.86 / 0.82   | 0.81 / 0.88 / 0.84     | 0.80 / 0.87 / 0.84     |
| No log        | 3.0   | 429  | 0.0944          | 0.84 / 0.89 / 0.87    | 0.82 / 0.85 / 0.83   | 0.83 / 0.87 / 0.85     | 0.83 / 0.87 / 0.85     |

(Weighted averages equal the micro averages at every epoch; support: 7079 Chemical, 4968 Disease, 12047 total.)
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
andakm/cats_classifier
|
andakm
| 2023-12-19T15:16:07Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-19T15:14:30Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: andakm/cats_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# andakm/cats_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6069
- Train Accuracy: 0.7143
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
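Pending author details, a minimal TensorFlow inference sketch (hypothetical usage; the repo id is this card's, the image path is a placeholder):
```python
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("andakm/cats_classifier")
model = TFAutoModelForImageClassification.from_pretrained("andakm/cats_classifier")

image = Image.open("cat.jpg")  # placeholder input
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
predicted = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[predicted])
```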
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 400, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 1.8083 | 0.2857 | 0 |
| 1.7613 | 0.5714 | 1 |
| 1.7004 | 0.7143 | 2 |
| 1.6459 | 0.7143 | 3 |
| 1.6069 | 0.7143 | 4 |
### Framework versions
- Transformers 4.36.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
GigaxCoder/mistral-7b-finetuned-giga-coder
|
GigaxCoder
| 2023-12-19T15:15:31Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2023-12-19T15:15:03Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
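In the absence of official instructions, a plausible loading sketch (an assumption based on the `peft` metadata in this card's header, not a confirmed recipe):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"  # base_model from the card metadata
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "GigaxCoder/mistral-7b-finetuned-giga-coder")
```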
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
hkivancoral/smids_10x_deit_small_sgd_0001_fold2
|
hkivancoral
| 2023-12-19T15:14:16Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-19T14:18:49Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_small_sgd_0001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.826955074875208
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_small_sgd_0001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4097
- Accuracy: 0.8270
## Model description
More information needed
## Intended uses & limitations
More information needed
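For reference, a quick inference sketch (hypothetical usage; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification",
                      model="hkivancoral/smids_10x_deit_small_sgd_0001_fold2")
print(classifier("example_patch.png"))  # returns label/score pairs
```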
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9968 | 1.0 | 750 | 1.0169 | 0.4659 |
| 0.9174 | 2.0 | 1500 | 0.9543 | 0.5308 |
| 0.8121 | 3.0 | 2250 | 0.8838 | 0.6273 |
| 0.7871 | 4.0 | 3000 | 0.8228 | 0.6522 |
| 0.691 | 5.0 | 3750 | 0.7665 | 0.6922 |
| 0.6733 | 6.0 | 4500 | 0.7184 | 0.7271 |
| 0.611 | 7.0 | 5250 | 0.6739 | 0.7488 |
| 0.5495 | 8.0 | 6000 | 0.6348 | 0.7537 |
| 0.5871 | 9.0 | 6750 | 0.6046 | 0.7587 |
| 0.5362 | 10.0 | 7500 | 0.5781 | 0.7754 |
| 0.5478 | 11.0 | 8250 | 0.5567 | 0.7754 |
| 0.5521 | 12.0 | 9000 | 0.5409 | 0.7804 |
| 0.475 | 13.0 | 9750 | 0.5265 | 0.7787 |
| 0.4124 | 14.0 | 10500 | 0.5147 | 0.7887 |
| 0.4689 | 15.0 | 11250 | 0.5048 | 0.7870 |
| 0.4042 | 16.0 | 12000 | 0.4956 | 0.7903 |
| 0.3787 | 17.0 | 12750 | 0.4873 | 0.7937 |
| 0.4203 | 18.0 | 13500 | 0.4799 | 0.7937 |
| 0.4173 | 19.0 | 14250 | 0.4729 | 0.7987 |
| 0.4444 | 20.0 | 15000 | 0.4676 | 0.8020 |
| 0.4225 | 21.0 | 15750 | 0.4619 | 0.8020 |
| 0.3886 | 22.0 | 16500 | 0.4572 | 0.8070 |
| 0.3882 | 23.0 | 17250 | 0.4523 | 0.8120 |
| 0.3793 | 24.0 | 18000 | 0.4484 | 0.8103 |
| 0.4027 | 25.0 | 18750 | 0.4443 | 0.8136 |
| 0.4864 | 26.0 | 19500 | 0.4411 | 0.8136 |
| 0.4229 | 27.0 | 20250 | 0.4378 | 0.8153 |
| 0.4258 | 28.0 | 21000 | 0.4349 | 0.8153 |
| 0.3905 | 29.0 | 21750 | 0.4322 | 0.8170 |
| 0.4099 | 30.0 | 22500 | 0.4297 | 0.8170 |
| 0.3721 | 31.0 | 23250 | 0.4276 | 0.8186 |
| 0.4104 | 32.0 | 24000 | 0.4255 | 0.8203 |
| 0.3815 | 33.0 | 24750 | 0.4237 | 0.8220 |
| 0.3966 | 34.0 | 25500 | 0.4218 | 0.8220 |
| 0.4057 | 35.0 | 26250 | 0.4202 | 0.8220 |
| 0.4004 | 36.0 | 27000 | 0.4187 | 0.8220 |
| 0.3921 | 37.0 | 27750 | 0.4174 | 0.8220 |
| 0.4046 | 38.0 | 28500 | 0.4161 | 0.8220 |
| 0.3819 | 39.0 | 29250 | 0.4149 | 0.8220 |
| 0.4626 | 40.0 | 30000 | 0.4139 | 0.8236 |
| 0.4062 | 41.0 | 30750 | 0.4130 | 0.8236 |
| 0.3793 | 42.0 | 31500 | 0.4123 | 0.8253 |
| 0.3246 | 43.0 | 32250 | 0.4116 | 0.8253 |
| 0.3382 | 44.0 | 33000 | 0.4110 | 0.8270 |
| 0.3636 | 45.0 | 33750 | 0.4106 | 0.8270 |
| 0.4008 | 46.0 | 34500 | 0.4102 | 0.8270 |
| 0.3708 | 47.0 | 35250 | 0.4099 | 0.8270 |
| 0.3436 | 48.0 | 36000 | 0.4098 | 0.8270 |
| 0.3738 | 49.0 | 36750 | 0.4097 | 0.8270 |
| 0.373 | 50.0 | 37500 | 0.4097 | 0.8270 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
sdpkjc/Walker2d-v4-sac_continuous_action-seed3
|
sdpkjc
| 2023-12-19T15:13:02Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Walker2d-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T15:12:47Z |
---
tags:
- Walker2d-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2d-v4
type: Walker2d-v4
metrics:
- type: mean_reward
value: 4471.15 +/- 1896.34
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Walker2d-v4**
This is a trained model of a SAC agent playing Walker2d-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package and run the evaluation script with the following commands:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Walker2d-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Walker2d-v4-sac_continuous_action-seed3/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Walker2d-v4-sac_continuous_action-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Walker2d-v4-sac_continuous_action-seed3/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Walker2d-v4 --seed 3 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Walker2d-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 3,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cometrain/moexT5
|
cometrain
| 2023-12-19T15:08:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"Cometrain AutoCode",
"Cometrain AlphaML",
"moex",
"en",
"ru",
"dataset:financial-sentiment-analysis",
"dataset:moscow-exchange-market",
"dataset:financial_phrasebank",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-10T06:33:40Z |
---
language:
- en
- ru
license: mit
tags:
- Cometrain AutoCode
- Cometrain AlphaML
- moex
datasets:
- financial-sentiment-analysis
- moscow-exchange-market
- financial_phrasebank
widget:
- text: "April 14 (Reuters) - Rio Tinto (RIO.AX), one of the largest Australian mining companies, on Thursday confirmed its exit from the state mining lobby group after raising concerns that its policy on expansion of coal mines did not align with the Paris Climate Agreement."
example_title: "Rio Tinto Decision (Neutral)"
- text: "LONDON, April 13 (Reuters) - Crypto lender Nexo said it has teamed up with global payments company Mastercard (MA.N) to launch on Wednesday what it calls the world's first crypto-backed payment card."
example_title: "New Mastercard & Nexo project (Positive)"
- text: "April 14 (Reuters) - The Russian rouble weakened on Thursday, driven by expectations that Russia may relax its temporary capital control measures further, while stocks fell as the country continued what it calls 'a special military operation' in Ukraine."
example_title: "Crisis in Russia (Negative)"
inference:
parameters:
top_p: 0.9
temperature: 0.5
---
# moexT5
The stocks-news-t5 model was further trained on Moscow Exchange data obtained with AlgoPack (https://www.moex.com/ru/algopack).
## stocks-news-t5
This model was automatically fine-tuned and tested as part of the development of a GPT-2-based AutoML framework for accelerated and easy development of enterprise NLP solutions. The fine-tuned [T5](https://huggingface.co/t5-base) can analyze financial market news.
Automatically trained on [Financial Sentiment Analysis(2022)](https://www.kaggle.com/datasets/sbhatti/financial-sentiment-analysis) dataset.
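A usage sketch mirroring the widget examples above (assuming a standard text2text pipeline; the sampling settings echo this card's inference parameters):
```python
from transformers import pipeline

analyze = pipeline("text2text-generation", model="cometrain/moexT5")
headline = ("LONDON, April 13 (Reuters) - Crypto lender Nexo said it has teamed up with "
            "global payments company Mastercard to launch the world's first crypto-backed payment card.")
# do_sample=True so the temperature/top_p settings take effect
print(analyze(headline, do_sample=True, top_p=0.9, temperature=0.5)[0]["generated_text"])
```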
## Made with Cometrain AlphaML & AutoCode
This model was automatically fine-tuned using the Cometrain AlphaML framework and tested with a CI/CD pipeline built by Cometrain AutoCode.
## Cometrain AlphaML command
```shell
$ cometrain create --name stocks-news --model auto --task 'Machine learning model for finance news analysis' --output transformers
```
|
andakm/bmw_classifier
|
andakm
| 2023-12-19T15:06:22Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-17T18:35:39Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: andakm/bmw_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# andakm/bmw_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1751
- Train Accuracy: 0.7941
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 2040, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 0.3531 | 0.7353 | 0 |
| 0.3083 | 0.7941 | 1 |
| 0.2895 | 0.6863 | 2 |
| 0.2210 | 0.7843 | 3 |
| 0.1751 | 0.7941 | 4 |
### Framework versions
- Transformers 4.36.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ppbrown/sd-dpo-convenience
|
ppbrown
| 2023-12-19T14:50:26Z | 0 | 3 | null |
[
"region:us"
] | null | 2023-12-19T14:42:45Z |
These files exist only as a convenience copy of the files under
https://huggingface.co/mhdang/dpo-sd1.5-text2image-v1
and
https://huggingface.co/mhdang/dpo-sdxl-text2image-v1
I did not create any of the work here. I only converted the unet files from mhdang into the more convenient "checkpoint" format.
|
Formid322/09lo-xqb3-fi6r-0
|
Formid322
| 2023-12-19T14:49:04Z | 0 | 0 | null |
[
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"region:us"
] |
text-generation
| 2023-12-19T14:49:00Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
hamxea/Llama-2-7b-chat-hf-activity-fine-tuned-adapters-v3
|
hamxea
| 2023-12-19T14:48:01Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-12-19T14:47:58Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
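Based on the `peft` metadata in this card's header, a loading sketch (an assumption, not a documented recipe; note the base model is gated and requires accepting Meta's license):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"  # base_model from the card metadata
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "hamxea/Llama-2-7b-chat-hf-activity-fine-tuned-adapters-v3")
model = model.merge_and_unload()  # optionally fold the adapters into the base weights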
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
dubik/ppo-SnowballTarget
|
dubik
| 2023-12-19T14:32:06Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-12-19T14:16:36Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: dubik/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Draichi/ppo-Huggy
|
Draichi
| 2023-12-19T14:31:11Z | 1 | 1 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-12-19T14:31:08Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Draichi/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
SuperAGI/mistral-7B-PoSE-32k
|
SuperAGI
| 2023-12-19T14:19:28Z | 12 | 14 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-15T12:50:15Z |
---
license: apache-2.0
---
# Model Card for Mistral-7B-32K-PoSE
The Mistral-7B-32K-PoSE Large Language Model (LLM) is a PoSE-trained Mistral-7B with a 32k context length.
Extending Mistral-7B's context length with PoSE passes the passkey retrieval task with only a marginal impact on the standard benchmarks.
For full details of this model please read our [release blog post](https://superagi.com/extending-context-window-of-a-7b-llm-from-8k-to-32k-using-pose-positional-skip-wise).
# Results
## PassKey retrieval
<img src="https://cdn-uploads.huggingface.co/production/uploads/655b8d65a8ec3f330f2089c8/Ke8Ge8Xcw6A53PRnXjFE8.png" alt="Alt text" width="700" height="700">
The evaluation focuses on effectiveness in passkey retrieval, highlighting the impact of varying context lengths on the model's ability to extract crucial information. Our model excels at information extraction and handles context lengths of up to 32k, surpassing the original Mistral-7B, which passed the test cases only when the context window was under 8k.
## Standard Benchmarking
<img src="https://cdn-uploads.huggingface.co/production/uploads/655b8d65a8ec3f330f2089c8/9AAOxpyoCh0UOFHIO8YkV.png" alt="Alt text" width="700" height="200">
Our model achieves an extension to 32k while only experiencing a marginal impact on the standard benchmark accuracy. This demonstrates a commendable ability to handle longer contexts without significantly compromising overall performance.
## Run the model
```python
from transformers import AutoTokenizer
# PoSE-patched modeling file (presumably shipped alongside the weights in this repo)
from my_modeling_mistral import MistralForCausalLM
model_id = "SuperAGI/mistral-7B-PoSE-32k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = MistralForCausalLM.from_pretrained(model_id)
text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Limitations
The Mistral-7B-32K-PoSE model is a demonstration that the context length can be extended without losing much performance.
It does not have any moderation mechanisms. The model is not suitable for production usage as it doesn't have guardrails for toxicity, societal bias, and language limitations. We would love to collaborate with the community to build safer and better models.
## The SuperAGI AI Team
Ishaan Bhola, Mukunda NS, Rajat Chawla, Anmol Gautam, Arkajit Datta, Ayush Vatsal, Sukrit Chatterjee, Adarsh Jha, Adarsh Deep, Abhijeet Sinha, Rakesh Krishna.
|
hkivancoral/smids_10x_deit_small_sgd_00001_fold1
|
hkivancoral
| 2023-12-19T14:19:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-19T13:23:50Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_small_sgd_00001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5542570951585977
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_small_sgd_00001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9431
- Accuracy: 0.5543
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0703 | 1.0 | 751 | 1.0730 | 0.4341 |
| 1.028 | 2.0 | 1502 | 1.0687 | 0.4341 |
| 1.0258 | 3.0 | 2253 | 1.0640 | 0.4374 |
| 1.0492 | 4.0 | 3004 | 1.0592 | 0.4391 |
| 1.036 | 5.0 | 3755 | 1.0544 | 0.4407 |
| 1.0185 | 6.0 | 4506 | 1.0494 | 0.4457 |
| 1.0044 | 7.0 | 5257 | 1.0445 | 0.4524 |
| 1.0106 | 8.0 | 6008 | 1.0397 | 0.4524 |
| 1.0008 | 9.0 | 6759 | 1.0351 | 0.4558 |
| 0.983 | 10.0 | 7510 | 1.0305 | 0.4624 |
| 0.9888 | 11.0 | 8261 | 1.0261 | 0.4691 |
| 0.98 | 12.0 | 9012 | 1.0217 | 0.4758 |
| 0.9777 | 13.0 | 9763 | 1.0175 | 0.4775 |
| 0.9805 | 14.0 | 10514 | 1.0134 | 0.4825 |
| 0.9554 | 15.0 | 11265 | 1.0095 | 0.4875 |
| 0.9727 | 16.0 | 12016 | 1.0055 | 0.4942 |
| 0.9405 | 17.0 | 12767 | 1.0016 | 0.4992 |
| 0.9669 | 18.0 | 13518 | 0.9980 | 0.5042 |
| 0.9407 | 19.0 | 14269 | 0.9944 | 0.5042 |
| 0.9487 | 20.0 | 15020 | 0.9909 | 0.5075 |
| 0.9336 | 21.0 | 15771 | 0.9876 | 0.5092 |
| 0.9505 | 22.0 | 16522 | 0.9843 | 0.5109 |
| 0.9425 | 23.0 | 17273 | 0.9812 | 0.5125 |
| 0.9422 | 24.0 | 18024 | 0.9782 | 0.5175 |
| 0.9397 | 25.0 | 18775 | 0.9753 | 0.5209 |
| 0.9277 | 26.0 | 19526 | 0.9725 | 0.5225 |
| 0.9248 | 27.0 | 20277 | 0.9699 | 0.5326 |
| 0.915 | 28.0 | 21028 | 0.9674 | 0.5342 |
| 0.9341 | 29.0 | 21779 | 0.9650 | 0.5376 |
| 0.9201 | 30.0 | 22530 | 0.9628 | 0.5392 |
| 0.8994 | 31.0 | 23281 | 0.9606 | 0.5376 |
| 0.9167 | 32.0 | 24032 | 0.9586 | 0.5392 |
| 0.8872 | 33.0 | 24783 | 0.9568 | 0.5426 |
| 0.8983 | 34.0 | 25534 | 0.9550 | 0.5426 |
| 0.8839 | 35.0 | 26285 | 0.9534 | 0.5442 |
| 0.9018 | 36.0 | 27036 | 0.9519 | 0.5476 |
| 0.8955 | 37.0 | 27787 | 0.9506 | 0.5492 |
| 0.8964 | 38.0 | 28538 | 0.9493 | 0.5492 |
| 0.9005 | 39.0 | 29289 | 0.9482 | 0.5492 |
| 0.8988 | 40.0 | 30040 | 0.9472 | 0.5526 |
| 0.8967 | 41.0 | 30791 | 0.9463 | 0.5543 |
| 0.8873 | 42.0 | 31542 | 0.9455 | 0.5543 |
| 0.9048 | 43.0 | 32293 | 0.9449 | 0.5543 |
| 0.8665 | 44.0 | 33044 | 0.9443 | 0.5543 |
| 0.8925 | 45.0 | 33795 | 0.9439 | 0.5543 |
| 0.8934 | 46.0 | 34546 | 0.9435 | 0.5543 |
| 0.8656 | 47.0 | 35297 | 0.9433 | 0.5543 |
| 0.9144 | 48.0 | 36048 | 0.9431 | 0.5543 |
| 0.9081 | 49.0 | 36799 | 0.9431 | 0.5543 |
| 0.8986 | 50.0 | 37550 | 0.9431 | 0.5543 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/smids_10x_deit_small_sgd_0001_fold1
|
hkivancoral
| 2023-12-19T14:18:17Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-19T13:22:59Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_small_sgd_0001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8464106844741235
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_small_sgd_0001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4121
- Accuracy: 0.8464
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9962 | 1.0 | 751 | 1.0240 | 0.4741 |
| 0.8829 | 2.0 | 1502 | 0.9630 | 0.5259 |
| 0.8203 | 3.0 | 2253 | 0.8958 | 0.6027 |
| 0.7637 | 4.0 | 3004 | 0.8327 | 0.6578 |
| 0.7321 | 5.0 | 3755 | 0.7782 | 0.6912 |
| 0.7017 | 6.0 | 4506 | 0.7239 | 0.7078 |
| 0.5812 | 7.0 | 5257 | 0.6809 | 0.7212 |
| 0.581 | 8.0 | 6008 | 0.6410 | 0.7346 |
| 0.5592 | 9.0 | 6759 | 0.6086 | 0.7513 |
| 0.5145 | 10.0 | 7510 | 0.5829 | 0.7679 |
| 0.5332 | 11.0 | 8261 | 0.5629 | 0.7746 |
| 0.4756 | 12.0 | 9012 | 0.5433 | 0.7796 |
| 0.4797 | 13.0 | 9763 | 0.5294 | 0.7846 |
| 0.4315 | 14.0 | 10514 | 0.5168 | 0.7930 |
| 0.4112 | 15.0 | 11265 | 0.5056 | 0.8013 |
| 0.4474 | 16.0 | 12016 | 0.4952 | 0.8030 |
| 0.4529 | 17.0 | 12767 | 0.4868 | 0.8097 |
| 0.421 | 18.0 | 13518 | 0.4802 | 0.8130 |
| 0.4112 | 19.0 | 14269 | 0.4730 | 0.8130 |
| 0.4039 | 20.0 | 15020 | 0.4670 | 0.8180 |
| 0.3219 | 21.0 | 15771 | 0.4615 | 0.8164 |
| 0.411 | 22.0 | 16522 | 0.4563 | 0.8180 |
| 0.3769 | 23.0 | 17273 | 0.4528 | 0.8214 |
| 0.4423 | 24.0 | 18024 | 0.4481 | 0.8214 |
| 0.4214 | 25.0 | 18775 | 0.4442 | 0.8230 |
| 0.4588 | 26.0 | 19526 | 0.4419 | 0.8280 |
| 0.3977 | 27.0 | 20277 | 0.4383 | 0.8314 |
| 0.4288 | 28.0 | 21028 | 0.4359 | 0.8297 |
| 0.3842 | 29.0 | 21779 | 0.4331 | 0.8331 |
| 0.38 | 30.0 | 22530 | 0.4307 | 0.8331 |
| 0.3344 | 31.0 | 23281 | 0.4288 | 0.8347 |
| 0.4273 | 32.0 | 24032 | 0.4264 | 0.8347 |
| 0.3923 | 33.0 | 24783 | 0.4244 | 0.8364 |
| 0.3452 | 34.0 | 25534 | 0.4233 | 0.8364 |
| 0.3666 | 35.0 | 26285 | 0.4214 | 0.8381 |
| 0.3806 | 36.0 | 27036 | 0.4199 | 0.8397 |
| 0.4471 | 37.0 | 27787 | 0.4189 | 0.8397 |
| 0.3236 | 38.0 | 28538 | 0.4183 | 0.8414 |
| 0.2974 | 39.0 | 29289 | 0.4171 | 0.8397 |
| 0.4164 | 40.0 | 30040 | 0.4161 | 0.8397 |
| 0.3819 | 41.0 | 30791 | 0.4153 | 0.8431 |
| 0.3798 | 42.0 | 31542 | 0.4146 | 0.8447 |
| 0.3898 | 43.0 | 32293 | 0.4139 | 0.8447 |
| 0.3508 | 44.0 | 33044 | 0.4133 | 0.8447 |
| 0.3647 | 45.0 | 33795 | 0.4128 | 0.8447 |
| 0.4056 | 46.0 | 34546 | 0.4125 | 0.8447 |
| 0.3591 | 47.0 | 35297 | 0.4123 | 0.8464 |
| 0.4233 | 48.0 | 36048 | 0.4121 | 0.8464 |
| 0.3734 | 49.0 | 36799 | 0.4121 | 0.8464 |
| 0.3779 | 50.0 | 37550 | 0.4121 | 0.8464 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Lolimorimorf/damage_trigger_effect_2023-12-19_14_11
|
Lolimorimorf
| 2023-12-19T14:16:32Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:Babelscape/wikineural-multilingual-ner",
"base_model:finetune:Babelscape/wikineural-multilingual-ner",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-19T14:16:03Z |
---
license: cc-by-nc-sa-4.0
base_model: Babelscape/wikineural-multilingual-ner
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: damage_trigger_effect_2023-12-19_14_11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# damage_trigger_effect_2023-12-19_14_11
This model is a fine-tuned version of [Babelscape/wikineural-multilingual-ner](https://huggingface.co/Babelscape/wikineural-multilingual-ner) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6940
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8550
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 34 | 0.6583 | 0.0 | 0.0 | 0.0 | 0.8158 |
| No log | 2.0 | 68 | 0.5418 | 0.0 | 0.0 | 0.0 | 0.8094 |
| No log | 3.0 | 102 | 0.4800 | 0.0 | 0.0 | 0.0 | 0.8418 |
| No log | 4.0 | 136 | 0.4383 | 0.0 | 0.0 | 0.0 | 0.8579 |
| No log | 5.0 | 170 | 0.4956 | 0.0 | 0.0 | 0.0 | 0.8449 |
| No log | 6.0 | 204 | 0.5156 | 0.0 | 0.0 | 0.0 | 0.8591 |
| No log | 7.0 | 238 | 0.5127 | 0.0 | 0.0 | 0.0 | 0.8591 |
| No log | 8.0 | 272 | 0.5488 | 0.0 | 0.0 | 0.0 | 0.8529 |
| No log | 9.0 | 306 | 0.6051 | 0.0 | 0.0 | 0.0 | 0.8529 |
| No log | 10.0 | 340 | 0.6026 | 0.0 | 0.0 | 0.0 | 0.8605 |
| No log | 11.0 | 374 | 0.6523 | 0.0 | 0.0 | 0.0 | 0.8506 |
| No log | 12.0 | 408 | 0.6824 | 0.0 | 0.0 | 0.0 | 0.8520 |
| No log | 13.0 | 442 | 0.6777 | 0.0 | 0.0 | 0.0 | 0.8550 |
| No log | 14.0 | 476 | 0.7056 | 0.0 | 0.0 | 0.0 | 0.8508 |
| 0.2478 | 15.0 | 510 | 0.6940 | 0.0 | 0.0 | 0.0 | 0.8550 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
SecondTheFirst/ppo-Huggy
|
SecondTheFirst
| 2023-12-19T14:15:46Z | 15 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-12-19T14:15:35Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: SecondTheFirst/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
alexandreacff/zephyr-apostilas-v2-enem-finetuned
|
alexandreacff
| 2023-12-19T14:13:10Z | 3 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-alpha-GPTQ",
"base_model:adapter:TheBloke/zephyr-7B-alpha-GPTQ",
"license:mit",
"region:us"
] | null | 2023-12-18T21:57:27Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: TheBloke/zephyr-7B-alpha-GPTQ
model-index:
- name: zephyr-apostilas-v2-enem-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-apostilas-v2-enem-finetuned
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 1.13.0+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
blanchon/sd-geolora3
|
blanchon
| 2023-12-19T14:12:01Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-12-19T13:32:56Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - blanchon/sd-geolora3
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the blanchon/merged_dataset dataset. You can find some example images below.




























|
Predict9731/speecht5_tts_voxpopuli_cs
|
Predict9731
| 2023-12-19T14:10:50Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"cs",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-12-08T14:10:40Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_tts_voxpopuli_cs
results: []
language:
- cs
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_tts_voxpopuli_cs
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4382
## Model description
More information needed
## Intended uses & limitations
More information needed
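Since the usage sections are empty, here is a minimal inference sketch following the standard SpeechT5 recipe; the speaker x-vector and the example sentence are stand-ins, not taken from the training data:
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("Predict9731/speecht5_tts_voxpopuli_cs")
model = SpeechT5ForTextToSpeech.from_pretrained("Predict9731/speecht5_tts_voxpopuli_cs")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Speaker x-vector; the CMU Arctic set is a common stand-in, not the training speaker.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Dobrý den, jak se máte?", return_tensors="pt")  # "Hello, how are you?"
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```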
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5186 | 0.89 | 1000 | 0.4621 |
| 0.4844 | 1.78 | 2000 | 0.4437 |
| 0.4851 | 2.68 | 3000 | 0.4404 |
| 0.4799 | 3.57 | 4000 | 0.4382 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.15.0
|
luodian/OTTER-Image-MPT7B
|
luodian
| 2023-12-19T14:08:01Z | 374 | 11 |
transformers
|
[
"transformers",
"pytorch",
"otter",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-09T03:54:03Z |
---
license: mit
---
<p align="center" width="100%">
<img src="https://i.postimg.cc/MKmyP9wH/new-banner.png" width="80%" height="80%">
</p>
<div>
<div align="center">
<a href='https://brianboli.com/' target='_blank'>Bo Li*<sup>1</sup></a> 
<a href='https://zhangyuanhan-ai.github.io/' target='_blank'>Yuanhan Zhang*<sup>,1</sup></a> 
<a href='https://cliangyu.com/' target='_blank'>Liangyu Chen*<sup>,1</sup></a> 
<a href='https://king159.github.io/' target='_blank'>Jinghao Wang*<sup>,1</sup></a> 
<a href='https://pufanyi.github.io/' target='_blank'>Fanyi Pu*<sup>,1</sup></a> 
<br/>
<a href='https://jingkang50.github.io/' target='_blank'>Jingkang Yang<sup>1</sup></a> 
<a href='https://chunyuan.li/' target='_blank'>Chunyuan Li<sup>2</sup></a> 
<a href='https://liuziwei7.github.io/' target='_blank'>Ziwei Liu<sup>1</sup></a>
</div>
<div>
<div align="center">
<sup>1</sup>S-Lab, Nanyang Technological University 
<sup>2</sup>Microsoft Research, Redmond
</div>
You can refer to the code below to start the evaluation and demo on your local machine:
https://github.com/Luodian/Otter/blob/8b386816ec67b15833cde3dcd1d7ca6a752d2451/pipeline/demos/demo_models.py#L35
|
EliottD/ppo-LunarLander-v2100000
|
EliottD
| 2023-12-19T14:02:21Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T14:01:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -6.52 +/- 147.83
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch to fill in the placeholder, assuming the checkpoint follows the usual huggingface_sb3 naming (check the repo's Files tab for the actual filename):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption based on common huggingface_sb3 conventions.
checkpoint = load_from_hub(repo_id="EliottD/ppo-LunarLander-v2100000", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Nazaninmnd/DreamBooth_MediumLongShot
|
Nazaninmnd
| 2023-12-19T14:00:21Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-18T12:50:55Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a mls photo of human
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Nazaninmnd/DreamBooth_MediumLongShot
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt "a mls photo of human" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: True.
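A minimal, untested diffusers sketch; the prompt extends the instance prompt above, and the scene details are made up:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Nazaninmnd/DreamBooth_MediumLongShot", torch_dtype=torch.float16
).to("cuda")

# Prompt built on the instance prompt "a mls photo of human"; the setting is invented.
image = pipe("a mls photo of human walking in a park").images[0]
image.save("mls_sample.png")
```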
|
EliottD/ppo-LunarLander-v210000
|
EliottD
| 2023-12-19T13:59:37Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T13:59:19Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -135.79 +/- 23.70
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch to fill in the placeholder; the checkpoint filename is an assumption, so verify it in the repo's Files tab:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption based on common huggingface_sb3 conventions.
checkpoint = load_from_hub(repo_id="EliottD/ppo-LunarLander-v210000", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Formid322/wyjj-4css-e4pz-0
|
Formid322
| 2023-12-19T13:58:25Z | 0 | 0 | null |
[
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"region:us"
] |
text-generation
| 2023-12-19T13:58:20Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
EliottD/ppo-LunarLander-v210
|
EliottD
| 2023-12-19T13:58:05Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T13:54:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -131.71 +/- 88.57
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch to fill in the placeholder; again, the checkpoint filename is assumed from the usual huggingface_sb3 convention:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption; check the repo's Files tab.
checkpoint = load_from_hub(repo_id="EliottD/ppo-LunarLander-v210", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
lawinsider/uk_ner_spacy
|
lawinsider
| 2023-12-19T13:52:27Z | 3 | 1 |
spacy
|
[
"spacy",
"token-classification",
"uk",
"dataset:lawinsider/uk_ner_contracts_spacy",
"model-index",
"region:us"
] |
token-classification
| 2023-11-13T15:48:31Z |
---
tags:
- spacy
- token-classification
language:
- uk
model-index:
- name: uk_ner_spacy
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9543899658
- name: NER Recall
type: recall
value: 0.9399213925
- name: NER F Score
type: f_score
value: 0.9471004243
datasets:
- lawinsider/uk_ner_contracts_spacy
---
| Feature | Description |
| --- | --- |
| **Name** | `uk_ner_spacy` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.6.1,<3.7.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (5 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `CLAUSE_NUMBER`, `CLAUSE_TITLE`, `CONTRACT_TYPE`, `DEFINITION_TITLE`, `MARGINAL` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 94.71 |
| `ENTS_P` | 95.44 |
| `ENTS_R` | 93.99 |
| `TOK2VEC_LOSS` | 18944.45 |
| `NER_LOSS` | 38361.74 |
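A minimal usage sketch, assuming the packaged pipeline from this repo has already been installed locally (for example via the wheel in the Files tab; the exact filename is not shown here):
```python
import spacy

# Assumes the packaged pipeline from this repo is installed by package name.
nlp = spacy.load("uk_ner_spacy")
doc = nlp("Цей Договір набирає чинності з моменту підписання.")  # "This Agreement enters into force upon signing."
for ent in doc.ents:
    print(ent.text, ent.label_)
```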
|
kajol/model_01
|
kajol
| 2023-12-19T13:47:58Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2023-12-18T23:22:15Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
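As the card is an unfilled template, here is a minimal, untested sketch of attaching the adapter to its listed base model (assuming it is a causal-LM adapter):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
model = PeftModel.from_pretrained(base, "kajol/model_01")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```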
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Lolimorimorf/feature_extraction_model_damage_trigger_effect_location_naacl_2025
|
Lolimorimorf
| 2023-12-19T13:47:09Z | 10 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:finetune:DeepPavlov/rubert-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-19T13:46:45Z |
---
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: damage_trigger_effect_2023-12-19_13_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# damage_trigger_effect_2023-12-19_13_42
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5476
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8690
## Model description
More information needed
## Intended uses & limitations
More information needed
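A minimal, untested sketch of running the model through the token-classification pipeline; the example sentence is a placeholder:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Lolimorimorf/feature_extraction_model_damage_trigger_effect_location_naacl_2025",
    aggregation_strategy="simple",
)
print(ner("Ураган повредил линии электропередачи в регионе."))  # "The hurricane damaged power lines in the region."
```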
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 34 | 0.6317 | 0.0 | 0.0 | 0.0 | 0.8083 |
| No log | 2.0 | 68 | 0.4823 | 0.0 | 0.0 | 0.0 | 0.8318 |
| No log | 3.0 | 102 | 0.4314 | 0.0 | 0.0 | 0.0 | 0.8563 |
| No log | 4.0 | 136 | 0.4323 | 0.0 | 0.0 | 0.0 | 0.8549 |
| No log | 5.0 | 170 | 0.4324 | 0.0 | 0.0 | 0.0 | 0.8586 |
| No log | 6.0 | 204 | 0.4647 | 0.0 | 0.0 | 0.0 | 0.8590 |
| No log | 7.0 | 238 | 0.4629 | 0.0 | 0.0 | 0.0 | 0.8686 |
| No log | 8.0 | 272 | 0.4958 | 0.0 | 0.0 | 0.0 | 0.8519 |
| No log | 9.0 | 306 | 0.4954 | 0.0 | 0.0 | 0.0 | 0.8675 |
| No log | 10.0 | 340 | 0.5220 | 0.0 | 0.0 | 0.0 | 0.8608 |
| No log | 11.0 | 374 | 0.5356 | 0.0 | 0.0 | 0.0 | 0.8616 |
| No log | 12.0 | 408 | 0.5416 | 0.0 | 0.0 | 0.0 | 0.8642 |
| No log | 13.0 | 442 | 0.5315 | 0.0 | 0.0 | 0.0 | 0.8660 |
| No log | 14.0 | 476 | 0.5496 | 0.0 | 0.0 | 0.0 | 0.8675 |
| 0.248 | 15.0 | 510 | 0.5476 | 0.0 | 0.0 | 0.0 | 0.8690 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
showrounak/bloom-song-lyrics
|
showrounak
| 2023-12-19T13:46:33Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/bloom-7b1",
"base_model:adapter:bigscience/bloom-7b1",
"region:us"
] | null | 2023-12-19T07:43:31Z |
---
library_name: peft
base_model: bigscience/bloom-7b1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
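The template above is unfilled; a minimal, untested sketch of loading the adapter onto its listed base model (assuming a causal-LM adapter, which the repo name suggests) and generating text:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-7b1", device_map="auto")
model = PeftModel.from_pretrained(base, "showrounak/bloom-song-lyrics")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-7b1")

prompt = "Write a verse about the sea:"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=60)[0], skip_special_tokens=True))
```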
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
ntc-ai/SDXL-LoRA-slider.maniacal-laughter
|
ntc-ai
| 2023-12-19T13:36:05Z | 71 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2023-12-19T13:36:02Z |
---
language:
- en
thumbnail: "images/evaluate/maniacal laughter.../maniacal laughter_17_3.0.png"
widget:
- text: maniacal laughter
output:
url: images/maniacal laughter_17_3.0.png
- text: maniacal laughter
output:
url: images/maniacal laughter_19_3.0.png
- text: maniacal laughter
output:
url: images/maniacal laughter_20_3.0.png
- text: maniacal laughter
output:
url: images/maniacal laughter_21_3.0.png
- text: maniacal laughter
output:
url: images/maniacal laughter_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "maniacal laughter"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - maniacal laughter (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/maniacal laughter_17_-3.0.png" width=256 height=256 /> | <img src="images/maniacal laughter_17_0.0.png" width=256 height=256 /> | <img src="images/maniacal laughter_17_3.0.png" width=256 height=256 /> |
| <img src="images/maniacal laughter_19_-3.0.png" width=256 height=256 /> | <img src="images/maniacal laughter_19_0.0.png" width=256 height=256 /> | <img src="images/maniacal laughter_19_3.0.png" width=256 height=256 /> |
| <img src="images/maniacal laughter_20_-3.0.png" width=256 height=256 /> | <img src="images/maniacal laughter_20_0.0.png" width=256 height=256 /> | <img src="images/maniacal laughter_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
maniacal laughter
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.maniacal-laughter', weight_name='maniacal laughter.safetensors', adapter_name="maniacal laughter")
# Activate the LoRA
pipe.set_adapters(["maniacal laughter"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, maniacal laughter"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 480+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
phatjk/vinallama-7b-chat-AWQ
|
phatjk
| 2023-12-19T13:32:14Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2023-12-19T13:04:21Z |
```python
quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" }
```
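The bare line above appears to be the AWQ quantization config the repo was produced with. A minimal loading sketch using AutoAWQ follows (an assumption, since the card does not name a loader):
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Assumes the AutoAWQ package; fuse_layers is optional.
model = AutoAWQForCausalLM.from_quantized("phatjk/vinallama-7b-chat-AWQ", fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained("phatjk/vinallama-7b-chat-AWQ")
```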
|
nesuri/sorsolingo-asr-bsl
|
nesuri
| 2023-12-19T13:29:40Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"bsl",
"dataset:nesuri/sorsolingo-tts-bsl",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-12-16T18:27:06Z |
---
language:
- bsl
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- nesuri/sorsolingo-tts-bsl
model-index:
- name: Sorsolingo-asr-bsl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sorsolingo-asr-bsl
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the sorsolingo-asr-bsl dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
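As the sections above are empty, a minimal transcription sketch (the audio filename is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="nesuri/sorsolingo-asr-bsl")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```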
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 450
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
jeffyu87/whisper-medium-100steps
|
jeffyu87
| 2023-12-19T13:27:10Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"whisper",
"arxiv:1910.09700",
"base_model:openai/whisper-medium",
"base_model:adapter:openai/whisper-medium",
"region:us"
] | null | 2023-12-09T10:47:20Z |
---
library_name: peft
base_model: openai/whisper-medium
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
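As a starting point, a minimal, untested sketch of attaching this adapter to its listed base model (assuming it targets Whisper's seq2seq generation model):
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(base, "jeffyu87/whisper-medium-100steps")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
```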
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1.dev0
|
N7D7/lucia_LoRA
|
N7D7
| 2023-12-19T13:27:06Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stablediffusionapi/juggernaut-xl-v7",
"base_model:adapter:stablediffusionapi/juggernaut-xl-v7",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-19T13:26:59Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stablediffusionapi/juggernaut-xl-v7
instance_prompt: a photo of TOK luciavarelaarroyo
license: openrail++
---
# SDXL LoRA DreamBooth - N7D7/lucia_LoRA
<Gallery />
## Model description
These are N7D7/lucia_LoRA LoRA adaptation weights for stablediffusionapi/juggernaut-xl-v7.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK luciavarelaarroyo to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/N7D7/lucia_LoRA/tree/main) them in the Files & versions tab.
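A minimal, untested diffusers sketch that combines the base model, training VAE, and trigger prompt listed above:
```python
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stablediffusionapi/juggernaut-xl-v7", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("N7D7/lucia_LoRA")

image = pipe("a photo of TOK luciavarelaarroyo").images[0]
image.save("lucia_sample.png")
```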
|
DataQueen/LAPSE_GEONAMES_RELOC
|
DataQueen
| 2023-12-19T13:18:46Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-12-19T11:09:17Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# LAPSE_GEONAMES_RELOC
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('DataQueen/LAPSE_GEONAMES_RELOC')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=DataQueen/LAPSE_GEONAMES_RELOC)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1188 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(3): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
geekradius/bart-large-cnn-fintetuned-samsum-repo
|
geekradius
| 2023-12-19T13:05:12Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"summary",
"summerizer",
"summarization",
"en",
"dataset:gopalkalpande/bbc-news-summary",
"license:bigscience-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-12-19T03:02:07Z |
---
license: bigscience-openrail-m
datasets:
- gopalkalpande/bbc-news-summary
language:
- en
metrics:
- rouge
library_name: transformers
pipeline_tag: summarization
tags:
- summary
- summerizer
---
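## Usage
The card body is empty, so here is a minimal summarization sketch (the input text is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="geekradius/bart-large-cnn-fintetuned-samsum-repo")
text = "Replace this with the news article or dialogue you want summarized."
print(summarizer(text, max_length=130, min_length=30, do_sample=False)[0]["summary_text"])
```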
|
AshanGimhana/THTestModelV2
|
AshanGimhana
| 2023-12-19T12:55:16Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2023-12-19T12:55:09Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: TinyPixel/Llama-2-7B-bf16-sharded
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [TinyPixel/Llama-2-7B-bf16-sharded](https://huggingface.co/TinyPixel/Llama-2-7B-bf16-sharded) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 120
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Sweta22/my-pet-cat
|
Sweta22
| 2023-12-19T12:53:35Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-19T12:49:03Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat Dreambooth model trained by Sweta22 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: LCS2022050
Sample pictures of this concept:
.png)
|
sefercanapaydin/sdxl-lora-sefo
|
sefercanapaydin
| 2023-12-19T12:48:24Z | 1 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-12-14T12:49:28Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of Sefo wearing a brown shirt, taking a selfie, and smiling.
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
espnet/kiritan_svs_rnn
|
espnet
| 2023-12-19T12:46:56Z | 2 | 0 |
espnet
|
[
"espnet",
"audio",
"singing-voice-synthesis",
"jp",
"dataset:kiritan",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null | 2023-12-19T12:45:54Z |
---
tags:
- espnet
- audio
- singing-voice-synthesis
language: jp
datasets:
- kiritan
license: cc-by-4.0
---
## ESPnet2 SVS model
### `espnet/kiritan_svs_rnn`
This model was trained by ftshijt using kiritan recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 5c4d7cf7feba8461de2e1080bf82182f0efaef38
pip install -e .
cd egs2/kiritan/svs1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/kiritan_svs_rnn
```
## SVS config
<details><summary>expand</summary>
```
config: conf/tuning/train_naive_rnn_dp.yaml
print_config: false
log_level: INFO
drop_last_iter: false
dry_run: false
iterator_type: sequence
valid_iterator_type: null
output_dir: exp/svs_train_naive_rnn_dp_raw_phn_pyopenjtalk_jp
ngpu: 1
seed: 0
num_workers: 8
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 500
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 2
nbest_averaging_interval: 0
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
use_lora: false
save_lora_only: true
lora_conf: {}
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/svs_stats_raw_phn_pyopenjtalk_jp/train/text_shape.phn
- exp/svs_stats_raw_phn_pyopenjtalk_jp/train/singing_shape
valid_shape_file:
- exp/svs_stats_raw_phn_pyopenjtalk_jp/valid/text_shape.phn
- exp/svs_stats_raw_phn_pyopenjtalk_jp/valid/singing_shape
batch_type: sorted
valid_batch_type: null
fold_length:
- 150
- 240000
sort_in_batch: descending
shuffle_within_batch: false
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
chunk_default_fs: null
train_data_path_and_name_and_type:
- - dump/raw/tr_no_dev/text
- text
- text
- - dump/raw/tr_no_dev/wav.scp
- singing
- sound
- - dump/raw/tr_no_dev/label
- label
- duration
- - dump/raw/tr_no_dev/score.scp
- score
- score
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - dump/raw/dev/wav.scp
- singing
- sound
- - dump/raw/dev/label
- label
- duration
- - dump/raw/dev/score.scp
- score
- score
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
allow_multi_rates: false
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-06
weight_decay: 0.0
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- pau
- a
- i
- o
- e
- u
- k
- n
- r
- t
- m
- d
- s
- N
- sh
- g
- y
- b
- w
- cl
- ts
- z
- ch
- j
- h
- f
- p
- ky
- ry
- hy
- py
- ny
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: pyopenjtalk
fs: 24000
score_feats_extract: syllable_score_feats
score_feats_extract_conf:
fs: 24000
n_fft: 2048
win_length: 1200
hop_length: 300
feats_extract: fbank
feats_extract_conf:
n_fft: 2048
hop_length: 300
win_length: 1200
fs: 24000
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/svs_stats_raw_phn_pyopenjtalk_jp/train/feats_stats.npz
svs: naive_rnn_dp
svs_conf:
midi_dim: 129
embed_dim: 512
duration_dim: 500
eprenet_conv_layers: 0
eprenet_conv_chans: 256
eprenet_conv_filts: 3
elayers: 3
eunits: 256
ebidirectional: true
midi_embed_integration_type: add
dlayers: 2
dunits: 256
dbidirectional: true
postnet_layers: 5
postnet_chans: 512
postnet_filts: 5
use_batch_norm: true
reduction_factor: 1
eprenet_dropout_rate: 0.2
edropout_rate: 0.1
ddropout_rate: 0.1
postnet_dropout_rate: 0.5
init_type: pytorch
use_masking: true
pitch_extract: dio
pitch_extract_conf:
use_token_averaged_f0: false
fs: 24000
n_fft: 2048
hop_length: 300
f0max: 800
f0min: 80
reduction_factor: 1
pitch_normalize: global_mvn
pitch_normalize_conf:
stats_file: exp/svs_stats_raw_phn_pyopenjtalk_jp/train/pitch_stats.npz
ying_extract: null
ying_extract_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202310'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{shi22d_interspeech,
author={Jiatong Shi and Shuai Guo and Tao Qian and Tomoki Hayashi and Yuning Wu and Fangzheng Xu and Xuankai Chang and Huazhe Li and Peter Wu and Shinji Watanabe and Qin Jin},
title={{Muskits: an End-to-end Music Processing Toolkit for Singing Voice Synthesis}},
year=2022,
booktitle={Proc. Interspeech 2022},
pages={4277--4281},
doi={10.21437/Interspeech.2022-10039}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
bartowski/Metis-0.4-exl2
|
bartowski
| 2023-12-19T12:42:49Z | 0 | 0 | null |
[
"text-generation",
"base_model:Mihaiii/Metis-0.3",
"base_model:finetune:Mihaiii/Metis-0.3",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-12-19T11:12:05Z |
---
base_model: Mihaiii/Metis-0.3
inference: false
license: apache-2.0
license_name: apache-2.0
metrics:
- accuracy
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of Metis-0.4
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
Each branch contains a quantization at a different bits per weight; the `main` branch contains only the measurement.json used for further conversions.
Conversion was done using the default calibration dataset.
Default arguments were used, except that when the bits per weight is above 6.0 the lm_head layer is quantized at 8 bits per weight instead of the default 6.
Original model: https://huggingface.co/Mihaiii/Metis-0.4
<a href="https://huggingface.co/bartowski/Metis-0.4-exl2/tree/4_0">4.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/Metis-0.4-exl2/tree/5_0">5.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/Metis-0.4-exl2/tree/6_0">6.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/Metis-0.4-exl2/tree/8_0">8.0 bits per weight</a>
## Download instructions
With git:
```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/Metis-0.4-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Metis-0.4-exl2`:
```shell
mkdir Metis-0.4-exl2
huggingface-cli download bartowski/Metis-0.4-exl2 --local-dir Metis-0.4-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Metis-0.4-exl2
huggingface-cli download bartowski/Metis-0.4-exl2 --revision 4_0 --local-dir Metis-0.4-exl2 --local-dir-use-symlinks False
```
|
satani/phtben-8
|
satani
| 2023-12-19T12:40:54Z | 4 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-19T12:36:51Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### phtben_8 Dreambooth model trained by satani with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
alitolga/gpt2-large-peft
|
alitolga
| 2023-12-19T12:32:08Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"region:us"
] | null | 2023-12-19T12:00:54Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-large-peft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large-peft
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0898
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.383 | 1.0 | 824 | 1.2637 |
| 1.3073 | 2.0 | 1648 | 1.1951 |
| 1.2566 | 3.0 | 2472 | 1.1491 |
| 1.2234 | 4.0 | 3296 | 1.1149 |
| 1.1816 | 5.0 | 4120 | 1.0898 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_SystemError1.0_Seed104
|
behzadnet
| 2023-12-19T12:27:13Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-19T12:27:06Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
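These values map directly onto `transformers.BitsAndBytesConfig`. Below is a minimal loading sketch using the same 4-bit settings; the base-model and adapter repo IDs are hypothetical placeholders, since the card does not name them:

```python
# A minimal sketch, assuming a causal-LM base; the repo IDs below are
# hypothetical placeholders, not taken from this card.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: True
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)

base = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config)
model = PeftModel.from_pretrained(base, "adapter-repo-id")
```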
### Framework versions
- PEFT 0.7.0.dev0
|
BKat/Musical-genres-Classification-Hubert-V1-finetuned-gtzan
|
BKat
| 2023-12-19T12:25:00Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:SeyedAli/Musical-genres-Classification-Hubert-V1",
"base_model:finetune:SeyedAli/Musical-genres-Classification-Hubert-V1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-12-19T09:22:47Z |
---
license: apache-2.0
base_model: SeyedAli/Musical-genres-Classification-Hubert-V1
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
model-index:
- name: Musical-genres-Classification-Hubert-V1-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Musical-genres-Classification-Hubert-V1-finetuned-gtzan
This model is a fine-tuned version of [SeyedAli/Musical-genres-Classification-Hubert-V1](https://huggingface.co/SeyedAli/Musical-genres-Classification-Hubert-V1) on the GTZAN dataset.
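A minimal inference sketch, assuming the standard `transformers` audio-classification pipeline; the audio path is a placeholder:

```python
# A minimal sketch; "song.wav" is a placeholder path to any audio clip.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="BKat/Musical-genres-Classification-Hubert-V1-finetuned-gtzan",
)
print(classifier("song.wav"))  # [{"label": ..., "score": ...}, ...]
```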
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.0+cu102
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Kshitij2406/GPT_TestSmall
|
Kshitij2406
| 2023-12-19T12:21:52Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-rw-1b",
"base_model:adapter:tiiuae/falcon-rw-1b",
"region:us"
] | null | 2023-12-19T12:19:45Z |
---
library_name: peft
base_model: tiiuae/falcon-rw-1b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
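A minimal sketch of loading this adapter on its declared base model, `tiiuae/falcon-rw-1b`; the prompt is a placeholder:

```python
# A minimal sketch, assuming this repo holds a PEFT adapter for
# tiiuae/falcon-rw-1b (per the card metadata); the prompt is a placeholder.
# Older transformers versions may additionally need trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-rw-1b")
base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-rw-1b")
model = PeftModel.from_pretrained(base, "Kshitij2406/GPT_TestSmall")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```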
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
racheilla/bert-base-indonesian-522M-finetuned-pemilu
|
racheilla
| 2023-12-19T12:12:24Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"base_model:cahya/bert-base-indonesian-522M",
"base_model:finetune:cahya/bert-base-indonesian-522M",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-12-19T09:21:41Z |
---
license: mit
base_model: cahya/bert-base-indonesian-522M
tags:
- generated_from_keras_callback
model-index:
- name: racheilla/bert-base-indonesian-522M-finetuned-pemilu
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# racheilla/bert-base-indonesian-522M-finetuned-pemilu
This model is a fine-tuned version of [cahya/bert-base-indonesian-522M](https://huggingface.co/cahya/bert-base-indonesian-522M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.2573
- Validation Loss: 3.4101
- Epoch: 39
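A minimal usage sketch, assuming the standard `transformers` fill-mask pipeline on the TensorFlow weights this repo ships; the Indonesian example sentence is a placeholder:

```python
# A minimal sketch; framework="tf" selects the TensorFlow weights,
# and the example sentence is a placeholder.
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="racheilla/bert-base-indonesian-522M-finetuned-pemilu",
    framework="tf",
)
print(fill("Pemilu akan diadakan pada bulan [MASK]."))
```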
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -950, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
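The serialized optimizer above matches what `transformers.create_optimizer` emits for TensorFlow; a hedged reconstruction from those values (note that the config's `decay_steps: -950` equals `num_train_steps - num_warmup_steps`, implying 50 total steps against 1000 warmup steps):

```python
# A hedged reconstruction of the AdamWeightDecay + WarmUp schedule above;
# all values are read back from the serialized config, assuming it was
# produced by transformers' TF helper.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=50,     # decay_steps (-950) + warmup_steps (1000)
    num_warmup_steps=1000,
    weight_decay_rate=0.01,
)
```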
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.2847 | 3.4266 | 0 |
| 3.3000 | 3.4116 | 1 |
| 3.2702 | 3.3975 | 2 |
| 3.2675 | 3.4689 | 3 |
| 3.2982 | 3.3540 | 4 |
| 3.3109 | 3.4127 | 5 |
| 3.2698 | 3.4126 | 6 |
| 3.2852 | 3.4165 | 7 |
| 3.2977 | 3.3816 | 8 |
| 3.2749 | 3.3923 | 9 |
| 3.2777 | 3.3841 | 10 |
| 3.2555 | 3.4534 | 11 |
| 3.2940 | 3.4194 | 12 |
| 3.2860 | 3.3810 | 13 |
| 3.2585 | 3.3328 | 14 |
| 3.2979 | 3.4310 | 15 |
| 3.2844 | 3.4374 | 16 |
| 3.2961 | 3.3630 | 17 |
| 3.2729 | 3.4132 | 18 |
| 3.2775 | 3.4114 | 19 |
| 3.2561 | 3.3869 | 20 |
| 3.3089 | 3.4583 | 21 |
| 3.2839 | 3.4010 | 22 |
| 3.2863 | 3.4335 | 23 |
| 3.2347 | 3.4040 | 24 |
| 3.2691 | 3.3805 | 25 |
| 3.2779 | 3.4005 | 26 |
| 3.3175 | 3.3627 | 27 |
| 3.2853 | 3.3995 | 28 |
| 3.2787 | 3.3904 | 29 |
| 3.2739 | 3.4169 | 30 |
| 3.2976 | 3.3728 | 31 |
| 3.2474 | 3.4051 | 32 |
| 3.3152 | 3.3760 | 33 |
| 3.2939 | 3.4185 | 34 |
| 3.2955 | 3.3978 | 35 |
| 3.2823 | 3.3749 | 36 |
| 3.3171 | 3.4078 | 37 |
| 3.2513 | 3.4022 | 38 |
| 3.2573 | 3.4101 | 39 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
marvelo2506/ppo-Huggy
|
marvelo2506
| 2023-12-19T12:08:23Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-12-19T12:08:19Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: marvelo2506/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
satani/phtben-7
|
satani
| 2023-12-19T12:06:41Z | 5 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-19T12:02:42Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### phtben_7 Dreambooth model trained by satani with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
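A minimal inference sketch, assuming the repo loads with diffusers' `StableDiffusionPipeline` (as its tags indicate); the instance token `phtben_7` in the prompt is an assumption based on the concept name:

```python
# A minimal sketch; the prompt token "phtben_7" is assumed from the
# concept name and may differ from the actual instance token.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("satani/phtben-7", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of phtben_7").images[0]
image.save("phtben_7.png")
```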
Sample pictures of this concept:
|
clarin-knext/RoBERTa-large-CST-finetuned
|
clarin-knext
| 2023-12-19T12:01:43Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:clarin-knext/cst_datasets",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-28T08:13:58Z |
---
license: cc-by-sa-4.0
language:
- en
metrics:
- accuracy
datasets:
- clarin-knext/cst_datasets
base_model: roberta-large
pipeline_tag: text-classification
model-index:
- name: accuracy
results:
- task:
type: text-classification
name: Text Classification
metrics:
- type: accuracy
value: 61.07
verified: false
widget:
- text: "Taking pictures can be straining for the arms. | The photographer is massaging her arm, sore from holding the lens."
example_title: "Generalization example"
- text: "The children told their parents that as they were going up to the third floor, the escalator stopped. | When we were reaching the third floor, the escalator stopped."
example_title: "Indirect speech example"
---
# Accuracy per class
<code>TODO</code>
# Usage
<code>TODO</code>
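A minimal inference sketch, assuming the standard `transformers` text-classification pipeline and the `sentence | sentence` pair format shown in the widget examples:

```python
# A minimal sketch; the input format (two sentences joined by " | ")
# is taken from the widget examples in the card metadata.
from transformers import pipeline

clf = pipeline("text-classification", model="clarin-knext/RoBERTa-large-CST-finetuned")
pair = ("Taking pictures can be straining for the arms. | "
        "The photographer is massaging her arm, sore from holding the lens.")
print(clf(pair))  # [{"label": ..., "score": ...}]
```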
|