| pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1-900k) | metadata (stringlengths 2-438k) | id (stringlengths 5-122) | last_modified (null) | tags (sequencelengths 1-1.84k) | sha (null) | created_at (stringlengths 25-25) | arxiv (sequencelengths 0-201) | languages (sequencelengths 0-1.83k) | tags_str (stringlengths 17-9.34k) | text_str (stringlengths 0-389k) | text_lists (sequencelengths 0-722) | processed_texts (sequencelengths 1-723) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Aratako/Japanese-Starling-ChatV-7B-RP
<!-- provided-files -->
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I have probably not planned to make them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
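For a quick local test of one of the files below, the llama-cpp-python bindings are one option. The sketch below is only illustrative: the quant filename is taken from the table in the next section, and the prompt and context size are placeholders.
```python
# Minimal sketch, assuming `pip install llama-cpp-python`; filename and settings are
# illustrative. Multi-part files (e.g. files ending in .part1of2) should be concatenated
# into a single .gguf first, as described in the READMEs linked above.
from llama_cpp import Llama

llm = Llama(model_path="Japanese-Starling-ChatV-7B-RP.Q4_K_M.gguf", n_ctx=4096)
out = llm("こんにちは。自己紹介をしてください。", max_tokens=128)  # Japanese prompt for this Japanese RP model
print(out["choices"][0]["text"])
```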
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Japanese-Starling-ChatV-7B-RP-GGUF/resolve/main/Japanese-Starling-ChatV-7B-RP.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Starling-ChatV-7B-RP-GGUF/resolve/main/Japanese-Starling-ChatV-7B-RP.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Starling-ChatV-7B-RP-GGUF/resolve/main/Japanese-Starling-ChatV-7B-RP.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Starling-ChatV-7B-RP-GGUF/resolve/main/Japanese-Starling-ChatV-7B-RP.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Starling-ChatV-7B-RP-GGUF/resolve/main/Japanese-Starling-ChatV-7B-RP.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Starling-ChatV-7B-RP-GGUF/resolve/main/Japanese-Starling-ChatV-7B-RP.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Starling-ChatV-7B-RP-GGUF/resolve/main/Japanese-Starling-ChatV-7B-RP.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Starling-ChatV-7B-RP-GGUF/resolve/main/Japanese-Starling-ChatV-7B-RP.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Starling-ChatV-7B-RP-GGUF/resolve/main/Japanese-Starling-ChatV-7B-RP.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Starling-ChatV-7B-RP-GGUF/resolve/main/Japanese-Starling-ChatV-7B-RP.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Starling-ChatV-7B-RP-GGUF/resolve/main/Japanese-Starling-ChatV-7B-RP.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Starling-ChatV-7B-RP-GGUF/resolve/main/Japanese-Starling-ChatV-7B-RP.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Starling-ChatV-7B-RP-GGUF/resolve/main/Japanese-Starling-ChatV-7B-RP.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Starling-ChatV-7B-RP-GGUF/resolve/main/Japanese-Starling-ChatV-7B-RP.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["not-for-all-audiences", "nsfw"], "datasets": ["grimulkan/LimaRP-augmented", "Aratako/Rosebleu-1on1-Dialogues-RP"], "base_model": "Aratako/Japanese-Starling-ChatV-7B-RP", "quantized_by": "mradermacher"} | mradermacher/Japanese-Starling-ChatV-7B-RP-GGUF | null | [
"transformers",
"gguf",
"not-for-all-audiences",
"nsfw",
"en",
"dataset:grimulkan/LimaRP-augmented",
"dataset:Aratako/Rosebleu-1on1-Dialogues-RP",
"base_model:Aratako/Japanese-Starling-ChatV-7B-RP",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:00:54+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #not-for-all-audiences #nsfw #en #dataset-grimulkan/LimaRP-augmented #dataset-Aratako/Rosebleu-1on1-Dialogues-RP #base_model-Aratako/Japanese-Starling-ChatV-7B-RP #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I have probably not planned to make them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #not-for-all-audiences #nsfw #en #dataset-grimulkan/LimaRP-augmented #dataset-Aratako/Rosebleu-1on1-Dialogues-RP #base_model-Aratako/Japanese-Starling-ChatV-7B-RP #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# gemma-2B Fine-Tuning on SAIL/Symbolic-Instruction-Tuning
This repository contains the `gemma-2B` model fine-tuned on the `sail/symbolic-instruction-tuning` dataset. The model is designed to interpret and execute symbolic instructions with improved accuracy and efficiency.
## Overview
The `gemma-2B` model, originally known for its robust language understanding capabilities, has been fine-tuned to enhance its performance on symbolic instruction data. This involves retraining the model on the `sail/symbolic-instruction-tuning` dataset, which comprises a diverse range of instructional data that tests a model's ability to follow abstract and complex directives.
## Motivation
The motivation behind fine-tuning `gemma-2B` on this particular dataset is to bridge the gap between language understanding and execution in a symbolic context. This has wide applications in areas such as code generation, automated reasoning, and more sophisticated AI instruction following.
## Getting Started
To use this model, you'll need to have an account on Hugging Face and the `transformers` library installed. You can install the library using pip:
```bash
pip install transformers
```
Once installed, you can use the following code to load and use the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "your-huggingface-username/gemma-2B-fine-tuned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Now you can use the model for inference
input_text = "Your symbolic instruction here"
input_ids = tokenizer.encode(input_text, return_tensors='pt')
# Generate the output
output = model.generate(input_ids)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## Fine-Tuning Process
The model was fine-tuned using the following process:
- Preprocessing: The `sail/symbolic-instruction-tuning` dataset was preprocessed to conform with the input format required by `gemma-2B`.
- Training: The model was fine-tuned using a custom training loop that monitors loss and evaluates on a held-out validation set.
- Hyperparameters: The fine-tuning used specific hyperparameters, which you can find in the `training_script.py` file.
- Evaluation: The fine-tuned model was evaluated against a benchmark to ensure that it meets our performance standards.
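The exact script and hyperparameters live in `training_script.py` as noted above. Purely as an illustration of the overall flow, a minimal sketch using the Hugging Face `Trainer` is shown below; the base checkpoint name, dataset column names, and hyperparameter values are assumptions, not the settings used for this model.
```python
# Illustrative fine-tuning sketch; base checkpoint, column names, and hyperparameters
# are assumptions, not the values used for this repository.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "google/gemma-2b"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

dataset = load_dataset("sail/symbolic-instruction-tuning", split="train")

def tokenize(example):
    # assumed field names; adapt to the actual dataset schema
    return tokenizer(example["input"] + "\n" + example["output"],
                     truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gemma-2b-symbolic",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           learning_rate=2e-5,
                           logging_steps=50),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```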
| {"license": "apache-2.0", "datasets": ["sail/symbolic-instruction-tuning"]} | rootsec1/gemma-2B-it-aipi | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"dataset:sail/symbolic-instruction-tuning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:01:28+00:00 | [] | [] | TAGS
#transformers #safetensors #gemma #text-generation #dataset-sail/symbolic-instruction-tuning #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# gemma-2B Fine-Tuning on SAIL/Symbolic-Instruction-Tuning
This repository contains the 'gemma-2B' model fine-tuned on the 'sail/symbolic-instruction-tuning' dataset. The model is designed to interpret and execute symbolic instructions with improved accuracy and efficiency.
## Overview
The 'gemma-2B' model, originally known for its robust language understanding capabilities, has been fine-tuned to enhance its performance on symbolic instruction data. This involves retraining the model on the 'sail/symbolic-instruction-tuning' dataset, which comprises a diverse range of instructional data that tests a model's ability to follow abstract and complex directives.
## Motivation
The motivation behind fine-tuning 'gemma-2B' on this particular dataset is to bridge the gap between language understanding and execution in a symbolic context. This has wide applications in areas such as code generation, automated reasoning, and more sophisticated AI instruction following.
## Getting Started
To use this model, you'll need to have an account on Hugging Face and the 'transformers' library installed. You can install the library using pip:
Once installed, you can use the following code to load and use the model:
## Fine-Tuning Process
The model was fine-tuned using the following process:
- Preprocessing: The 'sail/symbolic-instruction-tuning' dataset was preprocessed to conform with the input format required by 'gemma-2B'.
- Training: The model was fine-tuned using a custom training loop that monitors loss and evaluates on a held-out validation set.
- Hyperparameters: The fine-tuning used specific hyperparameters, which you can find in the 'training_script.py' file.
- Evaluation: The fine-tuned model was evaluated against a benchmark to ensure that it meets our performance standards.
| [
"# gemma-2B Fine-Tuning on SAIL/Symbolic-Instruction-Tuning\n\nThis repository contains the 'gemma-2B' model fine-tuned on the 'sail/symbolic-instruction-tuning' dataset. The model is designed to interpret and execute symbolic instructions with improved accuracy and efficiency.",
"## Overview\n\nThe 'gemma-2B' model, originally known for its robust language understanding capabilities, has been fine-tuned to enhance its performance on symbolic instruction data. This involves retraining the model on the 'sail/symbolic-instruction-tuning' dataset, which comprises a diverse range of instructional data that tests a model's ability to follow abstract and complex directives.",
"## Motivation\n\nThe motivation behind fine-tuning 'gemma-2B' on this particular dataset is to bridge the gap between language understanding and execution in a symbolic context. This has wide applications in areas such as code generation, automated reasoning, and more sophisticated AI instruction following.",
"## Getting Started\n\nTo use this model, you'll need to have an account on Hugging Face and the 'transformers' library installed. You can install the library using pip:\n\n\n\nOnce installed, you can use the following code to load and use the model:",
"## Fine-Tuning Process\n\nThe model was fine-tuned using the following process:\n\n- Preprocessing: The 'sail/symbolic-instruction-tuning' dataset was preprocessed to conform with the input format required by 'gemma-2B'.\n- Training: The model was fine-tuned using a custom training loop that monitors loss and evaluates on a held-out validation set.\n- Hyperparameters: The fine-tuning used specific hyperparameters, which you can find in the 'training_script.py' file.\n- Evaluation: The fine-tuned model was evaluated against a benchmark to ensure that it meets our performance standards."
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #dataset-sail/symbolic-instruction-tuning #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# gemma-2B Fine-Tuning on SAIL/Symbolic-Instruction-Tuning\n\nThis repository contains the 'gemma-2B' model fine-tuned on the 'sail/symbolic-instruction-tuning' dataset. The model is designed to interpret and execute symbolic instructions with improved accuracy and efficiency.",
"## Overview\n\nThe 'gemma-2B' model, originally known for its robust language understanding capabilities, has been fine-tuned to enhance its performance on symbolic instruction data. This involves retraining the model on the 'sail/symbolic-instruction-tuning' dataset, which comprises a diverse range of instructional data that tests a model's ability to follow abstract and complex directives.",
"## Motivation\n\nThe motivation behind fine-tuning 'gemma-2B' on this particular dataset is to bridge the gap between language understanding and execution in a symbolic context. This has wide applications in areas such as code generation, automated reasoning, and more sophisticated AI instruction following.",
"## Getting Started\n\nTo use this model, you'll need to have an account on Hugging Face and the 'transformers' library installed. You can install the library using pip:\n\n\n\nOnce installed, you can use the following code to load and use the model:",
"## Fine-Tuning Process\n\nThe model was fine-tuned using the following process:\n\n- Preprocessing: The 'sail/symbolic-instruction-tuning' dataset was preprocessed to conform with the input format required by 'gemma-2B'.\n- Training: The model was fine-tuned using a custom training loop that monitors loss and evaluates on a held-out validation set.\n- Hyperparameters: The fine-tuning used specific hyperparameters, which you can find in the 'training_script.py' file.\n- Evaluation: The fine-tuned model was evaluated against a benchmark to ensure that it meets our performance standards."
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) as a base.
### Models Merged
The following models were included in the merge:
* [l3utterfly/tinyllama-1.1b-layla-v4](https://huggingface.co/l3utterfly/tinyllama-1.1b-layla-v4)
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)
* [vihangd/DopeyTinyLlama-1.1B-v1](https://huggingface.co/vihangd/DopeyTinyLlama-1.1B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T
    # no parameters necessary for base model
  - model: vihangd/DopeyTinyLlama-1.1B-v1
    parameters:
      density: 0.50
      weight: 0.20
  - model: l3utterfly/tinyllama-1.1b-layla-v4
    parameters:
      density: 0.50
      weight: 0.30
  - model: appvoid/palmer-003
    parameters:
      density: 0.50
      weight: 0.50
merge_method: ties
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
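The configuration above is typically run through mergekit's `mergekit-yaml` command to produce the merged weights; the result can then be loaded like any other causal LM. A minimal sketch follows, with the repository id taken from this card and the prompt and generation settings purely illustrative.
```python
# Illustrative usage of the merged checkpoint; generation settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "appvoid/palmer-instruct-test-13"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```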
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["l3utterfly/tinyllama-1.1b-layla-v4", "appvoid/palmer-003", "vihangd/DopeyTinyLlama-1.1B-v1", "TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T"]} | appvoid/palmer-instruct-test-13 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:l3utterfly/tinyllama-1.1b-layla-v4",
"base_model:appvoid/palmer-003",
"base_model:vihangd/DopeyTinyLlama-1.1B-v1",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:03:54+00:00 | [
"2306.01708"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-l3utterfly/tinyllama-1.1b-layla-v4 #base_model-appvoid/palmer-003 #base_model-vihangd/DopeyTinyLlama-1.1B-v1 #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T as a base.
### Models Merged
The following models were included in the merge:
* l3utterfly/tinyllama-1.1b-layla-v4
* appvoid/palmer-003
* vihangd/DopeyTinyLlama-1.1B-v1
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* l3utterfly/tinyllama-1.1b-layla-v4\n* appvoid/palmer-003\n* vihangd/DopeyTinyLlama-1.1B-v1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-l3utterfly/tinyllama-1.1b-layla-v4 #base_model-appvoid/palmer-003 #base_model-vihangd/DopeyTinyLlama-1.1B-v1 #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* l3utterfly/tinyllama-1.1b-layla-v4\n* appvoid/palmer-003\n* vihangd/DopeyTinyLlama-1.1B-v1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
zero-shot-image-classification | transformers |
# Model Card: CLIP
Disclaimer: The model card is taken and modified from the official CLIP repository, it can be found [here](https://github.com/openai/CLIP/blob/main/model-card.md).
## Model Details
The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.
### Model Date
January 2021
### Model Type
The base model uses a ViT-L/14 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss.
The original implementation had two variants: one using a ResNet image encoder and the other using a Vision Transformer. This repository has the variant with the Vision Transformer.
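For intuition, a schematic sketch of this symmetric contrastive objective is shown below. It illustrates the loss described above rather than the actual training code for this checkpoint, and the temperature value is a placeholder.
```python
# Schematic sketch of a CLIP-style symmetric contrastive loss (illustrative only).
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_embeds, text_embeds, temperature=0.07):
    # normalize embeddings and compute pairwise cosine similarities
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    logits = image_embeds @ text_embeds.T / temperature
    # matching (image, text) pairs sit on the diagonal of the similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_images = F.cross_entropy(logits, targets)    # image -> text direction
    loss_texts = F.cross_entropy(logits.T, targets)   # text -> image direction
    return (loss_images + loss_texts) / 2
```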
### Documents
- [Blog Post](https://openai.com/blog/clip/)
- [CLIP Paper](https://arxiv.org/abs/2103.00020)
### Use with Transformers
```python
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
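The resulting `probs` tensor contains one probability per candidate caption for each image, so `probs.argmax(dim=1)` gives the index of the best-matching caption.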
## Model Use
### Intended Use
The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
#### Primary intended uses
The primary intended users of these models are AI researchers.
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
### Out-of-Scope Use Cases
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
## Data
The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users.
### Data Mission Statement
Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.
## Performance and Limitations
### Performance
We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets such as OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets:
- Food101
- CIFAR10
- CIFAR100
- Birdsnap
- SUN397
- Stanford Cars
- FGVC Aircraft
- VOC2007
- DTD
- Oxford-IIIT Pet dataset
- Caltech101
- Flowers102
- MNIST
- SVHN
- IIIT5K
- Hateful Memes
- SST-2
- UCF101
- Kinetics700
- Country211
- CLEVR Counting
- KITTI Distance
- STL-10
- RareAct
- Flickr30
- MSCOCO
- ImageNet
- ImageNet-A
- ImageNet-R
- ImageNet Sketch
- ObjectNet (ImageNet Overlap)
- Youtube-BB
- ImageNet-Vid
## Limitations
CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine grained classification and counting objects. CLIP also poses issues with regards to fairness and bias which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation- in many cases we have used linear probes to evaluate the performance of CLIP and there is evidence suggesting that linear probes can underestimate model performance.
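For reference, a linear probe of the kind mentioned above can be sketched as follows: freeze the CLIP image encoder, extract features, and fit a simple classifier on top. The helper below is illustrative; the caller supplies its own labeled PIL images.
```python
# Illustrative linear-probe sketch on frozen CLIP image features.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def image_features(images):
    """Encode a list of PIL images with the frozen CLIP image encoder."""
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        return model.get_image_features(**inputs).cpu().numpy()

def linear_probe_accuracy(train_images, train_labels, test_images, test_labels):
    """Fit a logistic-regression probe on frozen features and report test accuracy."""
    probe = LogisticRegression(max_iter=1000)
    probe.fit(image_features(train_images), train_labels)
    return probe.score(image_features(test_images), test_labels)
```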
### Bias and Fairness
We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper).
We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks.
## Feedback
### Where to send questions or comments about the model
Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9) | {"tags": ["vision"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png", "candidate_labels": "playing music, playing sports", "example_title": "Cat & Dog"}]} | polypo/openai-clip-vit-large-patch14 | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"clip",
"zero-shot-image-classification",
"vision",
"arxiv:2103.00020",
"arxiv:1908.04913",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:04:33+00:00 | [
"2103.00020",
"1908.04913"
] | [] | TAGS
#transformers #pytorch #tf #jax #safetensors #clip #zero-shot-image-classification #vision #arxiv-2103.00020 #arxiv-1908.04913 #endpoints_compatible #region-us
|
# Model Card: CLIP
Disclaimer: The model card is taken and modified from the official CLIP repository, it can be found here.
## Model Details
The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.
### Model Date
January 2021
### Model Type
The base model uses a ViT-L/14 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss.
The original implementation had two variants: one using a ResNet image encoder and the other using a Vision Transformer. This repository has the variant with the Vision Transformer.
### Documents
- Blog Post
- CLIP Paper
### Use with Transformers
## Model Use
### Intended Use
The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
#### Primary intended uses
The primary intended users of these models are AI researchers.
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
### Out-of-Scope Use Cases
Any deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
## Data
The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as YFCC100M. A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users.
### Data Mission Statement
Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.
## Performance and Limitations
### Performance
We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets such as OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets:
- Food101
- CIFAR10
- CIFAR100
- Birdsnap
- SUN397
- Stanford Cars
- FGVC Aircraft
- VOC2007
- DTD
- Oxford-IIIT Pet dataset
- Caltech101
- Flowers102
- MNIST
- SVHN
- IIIT5K
- Hateful Memes
- SST-2
- UCF101
- Kinetics700
- Country211
- CLEVR Counting
- KITTI Distance
- STL-10
- RareAct
- Flickr30
- MSCOCO
- ImageNet
- ImageNet-A
- ImageNet-R
- ImageNet Sketch
- ObjectNet (ImageNet Overlap)
- Youtube-BB
- ImageNet-Vid
## Limitations
CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine grained classification and counting objects. CLIP also poses issues with regards to fairness and bias which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation- in many cases we have used linear probes to evaluate the performance of CLIP and there is evidence suggesting that linear probes can underestimate model performance.
### Bias and Fairness
We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from Fairface into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper).
We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks.
## Feedback
### Where to send questions or comments about the model
Please use this Google Form | [
"# Model Card: CLIP\n\nDisclaimer: The model card is taken and modified from the official CLIP repository, it can be found here.",
"## Model Details\n\nThe CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.",
"### Model Date\n\nJanuary 2021",
"### Model Type\n\nThe base model uses a ViT-L/14 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss.\n\nThe original implementation had two variants: one using a ResNet image encoder and the other using a Vision Transformer. This repository has the variant with the Vision Transformer.",
"### Documents\n\n- Blog Post\n- CLIP Paper",
"### Use with Transformers",
"## Model Use",
"### Intended Use\n\nThe model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.",
"#### Primary intended uses\n\nThe primary intended users of these models are AI researchers.\n\nWe primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.",
"### Out-of-Scope Use Cases\n\nAny deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. \n\nCertain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.\n\nSince the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.",
"## Data\n\nThe model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as YFCC100M. A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users.",
"### Data Mission Statement\n\nOur goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.",
"## Performance and Limitations",
"### Performance\n\nWe have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets such as OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets:\n\n- Food101\n- CIFAR10 \n- CIFAR100 \n- Birdsnap\n- SUN397\n- Stanford Cars\n- FGVC Aircraft\n- VOC2007\n- DTD\n- Oxford-IIIT Pet dataset\n- Caltech101\n- Flowers102\n- MNIST \n- SVHN \n- IIIT5K \n- Hateful Memes \n- SST-2\n- UCF101\n- Kinetics700\n- Country211\n- CLEVR Counting\n- KITTI Distance\n- STL-10\n- RareAct\n- Flickr30\n- MSCOCO\n- ImageNet\n- ImageNet-A\n- ImageNet-R\n- ImageNet Sketch\n- ObjectNet (ImageNet Overlap)\n- Youtube-BB\n- ImageNet-Vid",
"## Limitations\n\nCLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine grained classification and counting objects. CLIP also poses issues with regards to fairness and bias which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation- in many cases we have used linear probes to evaluate the performance of CLIP and there is evidence suggesting that linear probes can underestimate model performance.",
"### Bias and Fairness\n\nWe find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from Fairface into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper).\n\nWe also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks.",
"## Feedback",
"### Where to send questions or comments about the model\n\nPlease use this Google Form"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #clip #zero-shot-image-classification #vision #arxiv-2103.00020 #arxiv-1908.04913 #endpoints_compatible #region-us \n",
"# Model Card: CLIP\n\nDisclaimer: The model card is taken and modified from the official CLIP repository, it can be found here.",
"## Model Details\n\nThe CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.",
"### Model Date\n\nJanuary 2021",
"### Model Type\n\nThe base model uses a ViT-L/14 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss.\n\nThe original implementation had two variants: one using a ResNet image encoder and the other using a Vision Transformer. This repository has the variant with the Vision Transformer.",
"### Documents\n\n- Blog Post\n- CLIP Paper",
"### Use with Transformers",
"## Model Use",
"### Intended Use\n\nThe model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.",
"#### Primary intended uses\n\nThe primary intended users of these models are AI researchers.\n\nWe primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.",
"### Out-of-Scope Use Cases\n\nAny deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. \n\nCertain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.\n\nSince the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.",
"## Data\n\nThe model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as YFCC100M. A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users.",
"### Data Mission Statement\n\nOur goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.",
"## Performance and Limitations",
"### Performance\n\nWe have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets such as OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets:\n\n- Food101\n- CIFAR10 \n- CIFAR100 \n- Birdsnap\n- SUN397\n- Stanford Cars\n- FGVC Aircraft\n- VOC2007\n- DTD\n- Oxford-IIIT Pet dataset\n- Caltech101\n- Flowers102\n- MNIST \n- SVHN \n- IIIT5K \n- Hateful Memes \n- SST-2\n- UCF101\n- Kinetics700\n- Country211\n- CLEVR Counting\n- KITTI Distance\n- STL-10\n- RareAct\n- Flickr30\n- MSCOCO\n- ImageNet\n- ImageNet-A\n- ImageNet-R\n- ImageNet Sketch\n- ObjectNet (ImageNet Overlap)\n- Youtube-BB\n- ImageNet-Vid",
"## Limitations\n\nCLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine grained classification and counting objects. CLIP also poses issues with regards to fairness and bias which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation- in many cases we have used linear probes to evaluate the performance of CLIP and there is evidence suggesting that linear probes can underestimate model performance.",
"### Bias and Fairness\n\nWe find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from Fairface into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper).\n\nWe also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks.",
"## Feedback",
"### Where to send questions or comments about the model\n\nPlease use this Google Form"
] |
zero-shot-image-classification | open_clip | # Model Card for CLIP ViT-bigG/14 - LAION-2B
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
7. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
A CLIP ViT-bigG/14 model trained with the LAION-2B English subset of LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip).
Model training done by Mitchell Wortsman on the [stability.ai](https://stability.ai/) cluster.
The license for this model is MIT.
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
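As a concrete illustration of zero-shot use with OpenCLIP, a minimal sketch is shown below; the model name and pretrained tag are inferred from this repository's name, and the image path and candidate labels are placeholders.
```python
# Illustrative zero-shot classification sketch with OpenCLIP; the model/pretrained tags
# are inferred from the repository name and may need adjusting.
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-bigG-14", pretrained="laion2b_s39b_b160k")
tokenizer = open_clip.get_tokenizer("ViT-bigG-14")

image = preprocess(Image.open("cat.png")).unsqueeze(0)      # placeholder image path
text = tokenizer(["a photo of a cat", "a photo of a dog"])  # placeholder labels

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)
```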
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
Further to the above notice, the LAION-5B dataset used to train these models has additional considerations; see below.
# Training Details
## Training Data
This model was trained with the 2 Billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).
Fine-tuning was also partially done on LAION-A, a 900M subset of LAION-2B filtered with aesthetic V2 4.5+ and phash deduplicated.
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from publically available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance for encountering potentially harmful content when viewing, we cannot entirely exclude the possibility for harmful content being still present in safe mode, so that the warning holds also there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
The training procedure will soon be discussed in a blog post on laion.ai.
# Evaluation
Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
The testing is performed with VTAB+ (A combination of VTAB (https://arxiv.org/abs/1910.04867) w/ additional robustness datasets) for classification and COCO and Flickr for retrieval.
**TODO** - more detail
## Results
The model achieves 80.1% zero-shot top-1 accuracy on ImageNet-1k.
An initial round of benchmarks have been performed on a wider range of datasets, and will soon be visible at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb
**TODO** - create table for just this model's metrics.
# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) for the compute used to train this model.
# Citation
**BibTeX:**
LAION-5B
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
OpenAI CLIP paper
```
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
OpenCLIP software
```
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
Scaling OpenCLIP paper
```
@article{cherti2022reproducible,
title={Reproducible scaling laws for contrastive language-image learning},
author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia},
journal={arXiv preprint arXiv:2212.07143},
year={2022}
}
```
# How to Get Started with the Model
Use the code below to get started with the model.
** TODO ** - Hugging Face transformers, OpenCLIP, and timm getting started snippets | {"license": "mit", "library_name": "open_clip", "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png", "candidate_labels": "playing music, playing sports", "example_title": "Cat & Dog"}], "pipeline_tag": "zero-shot-image-classification"} | polypo/laion-CLIP-ViT-bigG-14-laion2B-39B-b160k | null | [
"open_clip",
"pytorch",
"safetensors",
"clip",
"zero-shot-image-classification",
"arxiv:1910.04867",
"license:mit",
"region:us"
] | null | 2024-04-18T01:05:16+00:00 | [
"1910.04867"
] | [] | TAGS
#open_clip #pytorch #safetensors #clip #zero-shot-image-classification #arxiv-1910.04867 #license-mit #region-us
| # Model Card for CLIP ViT-bigG/14 - LAION-2B
# Table of Contents
1. Model Details
2. Uses
3. Training Details
4. Evaluation
5. Acknowledgements
6. Citation
7. How To Get Started With the Model
# Model Details
## Model Description
A CLIP ViT-bigG/14 model trained with the LAION-2B English subset of LAION-5B (URL using OpenCLIP (URL
Model training done by Mitchell Wortsman on the URL cluster.
The license for this model is MIT.
# Uses
As per the original OpenAI CLIP model card, this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such model.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (URL and upcoming paper include additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
Any deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
Further the above notice, the LAION-5B dataset used in training of these models has additional considerations, see below.
# Training Details
## Training Data
This model was trained with the 2 Billion sample English subset of LAION-5B (URL
Fine-tuning was also partially done on LAION-A, a 900M subset of LAION-2B filtered with aesthetic V2 4.5+ and phash deduplicated.
IMPORTANT NOTE: The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from publically available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance for encountering potentially harmful content when viewing, we cannot entirely exclude the possibility for harmful content being still present in safe mode, so that the warning holds also there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
The training procedure will soon be discussed by a blog post on URL.
# Evaluation
Evaluation done with code in the LAION CLIP Benchmark suite.
## Testing Data, Factors & Metrics
### Testing Data
The testing is performed with VTAB+ (A combination of VTAB (URL w/ additional robustness datasets) for classification and COCO and Flickr for retrieval.
TODO - more detail
## Results
The model achieves a 80.1 zero-shot top-1 accuracy on ImageNet-1k.
An initial round of benchmarks have been performed on a wider range of datasets, and will soon be visible at URL
TODO - create table for just this model's metrics.
# Acknowledgements
Acknowledging URL for the compute used to train this model.
BibTeX:
LAION-5B
OpenAI CLIP paper
OpenCLIP software
Scaling OpenCLIP paper
# How to Get Started with the Model
Use the code below to get started with the model.
TODO - Hugging Face transformers, OpenCLIP, and timm getting started snippets | [
"# Model Card for CLIP ViT-bigG/14 - LAION-2B",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Training Details\n4. Evaluation\n5. Acknowledgements\n6. Citation\n7. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nA CLIP ViT-bigG/14 model trained with the LAION-2B English subset of LAION-5B (URL using OpenCLIP (URL\n\nModel training done by Mitchell Wortsman on the URL cluster.\n\nThe license for this model is MIT.",
"# Uses\n\nAs per the original OpenAI CLIP model card, this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such model. \n\nThe OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (URL and upcoming paper include additional discussion as it relates specifically to the training dataset.",
"## Direct Use\n\nZero-shot image classification, image and text retrieval, among others.",
"## Downstream Use\n\nImage classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.",
"## Out-of-Scope Use\n\nAs per the OpenAI models,\n\nAny deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. \n\nCertain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.\n\nSince the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.\n\nFurther the above notice, the LAION-5B dataset used in training of these models has additional considerations, see below.",
"# Training Details",
"## Training Data\n\nThis model was trained with the 2 Billion sample English subset of LAION-5B (URL \nFine-tuning was also partially done on LAION-A, a 900M subset of LAION-2B filtered with aesthetic V2 4.5+ and phash deduplicated.\n\nIMPORTANT NOTE: The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from publically available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance for encountering potentially harmful content when viewing, we cannot entirely exclude the possibility for harmful content being still present in safe mode, so that the warning holds also there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.",
"## Training Procedure\n\nThe training procedure will soon be discussed by a blog post on URL.",
"# Evaluation\n\nEvaluation done with code in the LAION CLIP Benchmark suite.",
"## Testing Data, Factors & Metrics",
"### Testing Data\n\nThe testing is performed with VTAB+ (A combination of VTAB (URL w/ additional robustness datasets) for classification and COCO and Flickr for retrieval.\n\nTODO - more detail",
"## Results\n\nThe model achieves a 80.1 zero-shot top-1 accuracy on ImageNet-1k.\n\nAn initial round of benchmarks have been performed on a wider range of datasets, and will soon be visible at URL\n\nTODO - create table for just this model's metrics.",
"# Acknowledgements\n\nAcknowledging URL for the compute used to train this model.\n\nBibTeX:\n\nLAION-5B\n\n\nOpenAI CLIP paper\n\n\nOpenCLIP software\n\n\nScaling OpenCLIP paper",
"# How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n TODO - Hugging Face transformers, OpenCLIP, and timm getting started snippets"
] | [
"TAGS\n#open_clip #pytorch #safetensors #clip #zero-shot-image-classification #arxiv-1910.04867 #license-mit #region-us \n",
"# Model Card for CLIP ViT-bigG/14 - LAION-2B",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Training Details\n4. Evaluation\n5. Acknowledgements\n6. Citation\n7. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nA CLIP ViT-bigG/14 model trained with the LAION-2B English subset of LAION-5B (URL using OpenCLIP (URL\n\nModel training done by Mitchell Wortsman on the URL cluster.\n\nThe license for this model is MIT.",
"# Uses\n\nAs per the original OpenAI CLIP model card, this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such model. \n\nThe OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (URL and upcoming paper include additional discussion as it relates specifically to the training dataset.",
"## Direct Use\n\nZero-shot image classification, image and text retrieval, among others.",
"## Downstream Use\n\nImage classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.",
"## Out-of-Scope Use\n\nAs per the OpenAI models,\n\nAny deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. \n\nCertain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.\n\nSince the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.\n\nFurther the above notice, the LAION-5B dataset used in training of these models has additional considerations, see below.",
"# Training Details",
"## Training Data\n\nThis model was trained with the 2 Billion sample English subset of LAION-5B (URL \nFine-tuning was also partially done on LAION-A, a 900M subset of LAION-2B filtered with aesthetic V2 4.5+ and phash deduplicated.\n\nIMPORTANT NOTE: The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from publically available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance for encountering potentially harmful content when viewing, we cannot entirely exclude the possibility for harmful content being still present in safe mode, so that the warning holds also there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.",
"## Training Procedure\n\nThe training procedure will soon be discussed by a blog post on URL.",
"# Evaluation\n\nEvaluation done with code in the LAION CLIP Benchmark suite.",
"## Testing Data, Factors & Metrics",
"### Testing Data\n\nThe testing is performed with VTAB+ (A combination of VTAB (URL w/ additional robustness datasets) for classification and COCO and Flickr for retrieval.\n\nTODO - more detail",
"## Results\n\nThe model achieves a 80.1 zero-shot top-1 accuracy on ImageNet-1k.\n\nAn initial round of benchmarks have been performed on a wider range of datasets, and will soon be visible at URL\n\nTODO - create table for just this model's metrics.",
"# Acknowledgements\n\nAcknowledging URL for the compute used to train this model.\n\nBibTeX:\n\nLAION-5B\n\n\nOpenAI CLIP paper\n\n\nOpenCLIP software\n\n\nScaling OpenCLIP paper",
"# How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n TODO - Hugging Face transformers, OpenCLIP, and timm getting started snippets"
] |
image-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | aneeshks/vit-base-patch16-224-in21k-lora-cifar100 | null | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:07:19+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #vit #image-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #vit #image-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-IberAuTexTification2024-9010-task2-v1
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7095
- F1: 0.7742
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a matching `TrainingArguments` sketch is shown after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
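For reference, a `TrainingArguments` sketch matching these values might look as follows; the output directory and the per-epoch evaluation strategy are assumptions, since the original training script is not included in this card.
```python
# Hypothetical TrainingArguments mirroring the hyperparameters above (Transformers 4.28-era API).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-IberAuTexTification2024-9010-task2-v1",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    evaluation_strategy="epoch",  # assumed from the per-epoch validation results below
)
```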
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5724 | 1.0 | 3305 | 0.8187 | 0.7017 |
| 0.3546 | 2.0 | 6610 | 0.7095 | 0.7742 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "model-index": [{"name": "xlm-roberta-base-finetuned-IberAuTexTification2024-9010-task2-v1", "results": []}]} | vg055/xlm-roberta-base-finetuned-IberAuTexTification2024-9010-task2-v1 | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:08:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
| xlm-roberta-base-finetuned-IberAuTexTification2024-9010-task2-v1
================================================================
This model is a fine-tuned version of FacebookAI/xlm-roberta-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7095
* F1: 0.7742
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.28.0
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.13.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.28.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.28.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.13.3"
] |
null | null |
# MrAiran/pythia-13b-deduped-green_devil-Q4_K_S-GGUF
This model was converted to GGUF format from [`Pirr/pythia-13b-deduped-green_devil`](https://huggingface.co/Pirr/pythia-13b-deduped-green_devil) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Pirr/pythia-13b-deduped-green_devil) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo MrAiran/pythia-13b-deduped-green_devil-Q4_K_S-GGUF --model pythia-13b-deduped-green_devil.Q4_K_S.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo MrAiran/pythia-13b-deduped-green_devil-Q4_K_S-GGUF --model pythia-13b-deduped-green_devil.Q4_K_S.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pythia-13b-deduped-green_devil.Q4_K_S.gguf -n 128
```
| {"tags": ["llama-cpp", "gguf-my-repo"]} | MrAiran/pythia-13b-deduped-green_devil-Q4_K_S-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"region:us"
] | null | 2024-04-18T01:12:17+00:00 | [] | [] | TAGS
#gguf #llama-cpp #gguf-my-repo #region-us
|
# MrAiran/pythia-13b-deduped-green_devil-Q4_K_S-GGUF
This model was converted to GGUF format from 'Pirr/pythia-13b-deduped-green_devil' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# MrAiran/pythia-13b-deduped-green_devil-Q4_K_S-GGUF\nThis model was converted to GGUF format from 'Pirr/pythia-13b-deduped-green_devil' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #region-us \n",
"# MrAiran/pythia-13b-deduped-green_devil-Q4_K_S-GGUF\nThis model was converted to GGUF format from 'Pirr/pythia-13b-deduped-green_devil' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers | # nbeerbower/bophades-mistral-truthy-DPO-7B AWQ
- Model creator: [nbeerbower](https://huggingface.co/nbeerbower)
- Original model: [bophades-mistral-truthy-DPO-7B](https://huggingface.co/nbeerbower/bophades-mistral-truthy-DPO-7B)

## Model Summary
[bophades-v2-mistral-7B](https://huggingface.co/nbeerbower/bophades-v2-mistral-7B) finetuned on [jondurbin/truthy-dpo-v0.1](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1).
Finetuned using an A100 on Google Colab. 🙏
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
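This card does not ship a usage snippet; below is a minimal, hedged sketch of loading the AWQ checkpoint with 🤗 Transformers. It assumes `autoawq` and `accelerate` are installed and that the repo id matches this model; the prompt is illustrative only.
```python
# Minimal text-generation sketch for the AWQ checkpoint (assumes autoawq + accelerate are installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/bophades-mistral-truthy-DPO-7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Tell me something true about the Moon."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```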
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "finetuned", "mistral"], "datasets": ["jondurbin/truthy-dpo-v0.1"], "base_model": ["nbeerbower/bophades-v2-mistral-7B"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/bophades-mistral-truthy-DPO-7B-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"finetuned",
"dataset:jondurbin/truthy-dpo-v0.1",
"base_model:nbeerbower/bophades-v2-mistral-7B",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:13:50+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #finetuned #dataset-jondurbin/truthy-dpo-v0.1 #base_model-nbeerbower/bophades-v2-mistral-7B #license-apache-2.0 #text-generation-inference #region-us
| # nbeerbower/bophades-mistral-truthy-DPO-7B AWQ
- Model creator: nbeerbower
- Original model: bophades-mistral-truthy-DPO-7B
!image/png
## Model Summary
bophades-v2-mistral-7B finetuned on jondurbin/truthy-dpo-v0.1.
Finetuned using an A100 on Google Colab.
Fine-tune a Mistral-7b model with Direct Preference Optimization - Maxime Labonne
| [
"# nbeerbower/bophades-mistral-truthy-DPO-7B AWQ\n\n- Model creator: nbeerbower\n- Original model: bophades-mistral-truthy-DPO-7B\n\n!image/png",
"## Model Summary\n\nbophades-v2-mistral-7B finetuned on jondurbin/truthy-dpo-v0.1. \n\nFinetuned using an A100 on Google Colab. \n\nFine-tune a Mistral-7b model with Direct Preference Optimization - Maxime Labonne"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #finetuned #dataset-jondurbin/truthy-dpo-v0.1 #base_model-nbeerbower/bophades-v2-mistral-7B #license-apache-2.0 #text-generation-inference #region-us \n",
"# nbeerbower/bophades-mistral-truthy-DPO-7B AWQ\n\n- Model creator: nbeerbower\n- Original model: bophades-mistral-truthy-DPO-7B\n\n!image/png",
"## Model Summary\n\nbophades-v2-mistral-7B finetuned on jondurbin/truthy-dpo-v0.1. \n\nFinetuned using an A100 on Google Colab. \n\nFine-tune a Mistral-7b model with Direct Preference Optimization - Maxime Labonne"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)
* [l3utterfly/tinyllama-1.1b-layla-v4](https://huggingface.co/l3utterfly/tinyllama-1.1b-layla-v4)
* [vihangd/DopeyTinyLlama-1.1B-v1](https://huggingface.co/vihangd/DopeyTinyLlama-1.1B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T
#no parameters necessary for base model
- model: vihangd/DopeyTinyLlama-1.1B-v1
parameters:
density: 0.10
weight: 0.60
- model: l3utterfly/tinyllama-1.1b-layla-v4
parameters:
density: 0.39
weight: 0.50
- model: appvoid/palmer-003
parameters:
density: 0.60
weight: 0.40
merge_method: ties
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T
parameters:
normalize: true
int8_mask: true
dtype: float16
```
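To reproduce a merge from a config like this one, mergekit's command-line entry point can be invoked roughly as below; the file and output paths are placeholders, and the exact flags available depend on your mergekit version.
```bash
# Sketch of running the merge locally (paths are placeholders).
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```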
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["appvoid/palmer-003", "l3utterfly/tinyllama-1.1b-layla-v4", "TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T", "vihangd/DopeyTinyLlama-1.1B-v1"]} | appvoid/palmer-instruct-test-14 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:appvoid/palmer-003",
"base_model:l3utterfly/tinyllama-1.1b-layla-v4",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T",
"base_model:vihangd/DopeyTinyLlama-1.1B-v1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:14:32+00:00 | [
"2306.01708"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-appvoid/palmer-003 #base_model-l3utterfly/tinyllama-1.1b-layla-v4 #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T #base_model-vihangd/DopeyTinyLlama-1.1B-v1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T as a base.
### Models Merged
The following models were included in the merge:
* appvoid/palmer-003
* l3utterfly/tinyllama-1.1b-layla-v4
* vihangd/DopeyTinyLlama-1.1B-v1
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* appvoid/palmer-003\n* l3utterfly/tinyllama-1.1b-layla-v4\n* vihangd/DopeyTinyLlama-1.1B-v1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-appvoid/palmer-003 #base_model-l3utterfly/tinyllama-1.1b-layla-v4 #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T #base_model-vihangd/DopeyTinyLlama-1.1B-v1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* appvoid/palmer-003\n* l3utterfly/tinyllama-1.1b-layla-v4\n* vihangd/DopeyTinyLlama-1.1B-v1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
  This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="ahforoughi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | ahforoughi/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-18T01:16:06+00:00 | [] | [] | TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing1 FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1 .
## Usage
| [
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] | [
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOOwO/dumbo-krillin48 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:16:10+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) as a base.
### Models Merged
The following models were included in the merge:
* [l3utterfly/tinyllama-1.1b-layla-v4](https://huggingface.co/l3utterfly/tinyllama-1.1b-layla-v4)
* [vihangd/DopeyTinyLlama-1.1B-v1](https://huggingface.co/vihangd/DopeyTinyLlama-1.1B-v1)
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T
#no parameters necessary for base model
- model: vihangd/DopeyTinyLlama-1.1B-v1
parameters:
density: 0.50
weight: 0.60
- model: l3utterfly/tinyllama-1.1b-layla-v4
parameters:
density: 0.60
weight: 0.50
- model: appvoid/palmer-003
parameters:
density: 0.40
weight: 0.40
merge_method: ties
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T
parameters:
normalize: true
int8_mask: true
dtype: float16
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["l3utterfly/tinyllama-1.1b-layla-v4", "vihangd/DopeyTinyLlama-1.1B-v1", "TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T", "appvoid/palmer-003"]} | appvoid/palmer-instruct-test-15 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:l3utterfly/tinyllama-1.1b-layla-v4",
"base_model:vihangd/DopeyTinyLlama-1.1B-v1",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T",
"base_model:appvoid/palmer-003",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:17:37+00:00 | [
"2306.01708"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-l3utterfly/tinyllama-1.1b-layla-v4 #base_model-vihangd/DopeyTinyLlama-1.1B-v1 #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T #base_model-appvoid/palmer-003 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T as a base.
### Models Merged
The following models were included in the merge:
* l3utterfly/tinyllama-1.1b-layla-v4
* vihangd/DopeyTinyLlama-1.1B-v1
* appvoid/palmer-003
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* l3utterfly/tinyllama-1.1b-layla-v4\n* vihangd/DopeyTinyLlama-1.1B-v1\n* appvoid/palmer-003",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-l3utterfly/tinyllama-1.1b-layla-v4 #base_model-vihangd/DopeyTinyLlama-1.1B-v1 #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T #base_model-appvoid/palmer-003 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* l3utterfly/tinyllama-1.1b-layla-v4\n* vihangd/DopeyTinyLlama-1.1B-v1\n* appvoid/palmer-003",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | TeeA/codeLlama_text2sql_syll_r128 | null | [
"transformers",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:18:09+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #tensorboard #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #tensorboard #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # nbeerbower/flammen17-mistral-7B AWQ
- Model creator: [nbeerbower](https://huggingface.co/nbeerbower)
- Original model: [flammen17-mistral-7B](https://huggingface.co/nbeerbower/flammen17-mistral-7B)

## Model Summary
A Mistral 7B LLM built from merging pretrained models and finetuning.
Flammen specializes in exceptional character roleplay, creative writing, and general intelligence
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
This model was merged using the SLERP merge method.
The following models were included in the merge:
* [nbeerbower/Flammen-Bophades-7B](https://huggingface.co/nbeerbower/Flammen-Bophades-7B)
* [nbeerbower/flammen16-mistral-7B](https://huggingface.co/nbeerbower/flammen16-mistral-7B)
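A minimal loading sketch, assuming a recent `transformers` release with AWQ support and the `autoawq` package installed; the quantization config stored in the repo should be picked up automatically:

```python
# Minimal sketch: load the AWQ-quantized checkpoint like any other causal LM.
# Assumes transformers with AWQ support plus autoawq; AWQ kernels expect a CUDA GPU,
# and device_map="auto" requires accelerate.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "solidrust/flammen17-mistral-7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Write a short in-character greeting from a wandering bard."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=96)[0], skip_special_tokens=True))
```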
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "mergekit", "merge"], "base_model": ["nbeerbower/Flammen-Bophades-7B", "nbeerbower/flammen16-mistral-7B"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/flammen17-mistral-7B-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"mergekit",
"merge",
"base_model:nbeerbower/Flammen-Bophades-7B",
"base_model:nbeerbower/flammen16-mistral-7B",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:19:14+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #mergekit #merge #base_model-nbeerbower/Flammen-Bophades-7B #base_model-nbeerbower/flammen16-mistral-7B #license-apache-2.0 #text-generation-inference #region-us
| # nbeerbower/flammen17-mistral-7B AWQ
- Model creator: nbeerbower
- Original model: flammen17-mistral-7B
!image/png
## Model Summary
A Mistral 7B LLM built from merging pretrained models and finetuning.
Flammen specializes in exceptional character roleplay, creative writing, and general intelligence
This is a merge of pre-trained language models created using mergekit.
This model was merged using the SLERP merge method.
The following models were included in the merge:
* nbeerbower/Flammen-Bophades-7B
* nbeerbower/flammen16-mistral-7B
| [
"# nbeerbower/flammen17-mistral-7B AWQ\n\n- Model creator: nbeerbower\n- Original model: flammen17-mistral-7B\n\n!image/png",
"## Model Summary\n\nA Mistral 7B LLM built from merging pretrained models and finetuning.\nFlammen specializes in exceptional character roleplay, creative writing, and general intelligence\n\nThis is a merge of pre-trained language models created using mergekit.\n\nThis model was merged using the SLERP merge method.\n\nThe following models were included in the merge:\n* nbeerbower/Flammen-Bophades-7B\n* nbeerbower/flammen16-mistral-7B"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #mergekit #merge #base_model-nbeerbower/Flammen-Bophades-7B #base_model-nbeerbower/flammen16-mistral-7B #license-apache-2.0 #text-generation-inference #region-us \n",
"# nbeerbower/flammen17-mistral-7B AWQ\n\n- Model creator: nbeerbower\n- Original model: flammen17-mistral-7B\n\n!image/png",
"## Model Summary\n\nA Mistral 7B LLM built from merging pretrained models and finetuning.\nFlammen specializes in exceptional character roleplay, creative writing, and general intelligence\n\nThis is a merge of pre-trained language models created using mergekit.\n\nThis model was merged using the SLERP merge method.\n\nThe following models were included in the merge:\n* nbeerbower/Flammen-Bophades-7B\n* nbeerbower/flammen16-mistral-7B"
] |
null | transformers |
# Uploaded model
- **Developed by:** InferenceIllusionist
- **License:** apache-2.0
- **Finetuned from model :** mistral-community/Mistral-7B-v0.2
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
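A minimal sketch for using the adapter, assuming this repo holds standard PEFT LoRA files (adapter config plus weights) on top of the base model named above:

```python
# Rough sketch: attach the LoRA adapter to the base Mistral-7B-v0.2 weights with peft.
# Assumes standard PEFT adapter files in the adapter repo; requires transformers, peft, accelerate.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistral-community/Mistral-7B-v0.2"
adapter_id = "InferenceIllusionist/lora_mistral-7b-RealWorldQA-v0.2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # LoRA weights are applied on top
model = model.merge_and_unload()                     # optional: bake the adapter into the base weights
```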
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "mistral-community/Mistral-7B-v0.2"} | InferenceIllusionist/lora_mistral-7b-RealWorldQA-v0.2 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:mistral-community/Mistral-7B-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:19:35+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-mistral-community/Mistral-7B-v0.2 #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: InferenceIllusionist
- License: apache-2.0
- Finetuned from model : mistral-community/Mistral-7B-v0.2
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: InferenceIllusionist\n- License: apache-2.0\n- Finetuned from model : mistral-community/Mistral-7B-v0.2\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-mistral-community/Mistral-7B-v0.2 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: InferenceIllusionist\n- License: apache-2.0\n- Finetuned from model : mistral-community/Mistral-7B-v0.2\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="ahforoughi/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
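For a self-contained run, the sketch below fetches the pickle with `huggingface_hub` and plays one greedy episode; the `qtable` and `env_id` keys are assumptions based on the Hugging Face Deep RL course format, and `gymnasium` is used for the environment API.

```python
# Rough sketch: download the pickled Q-table and roll out one greedy episode.
# Assumes the pickle is a dict with "env_id" and "qtable" keys (HF Deep RL course format).
import pickle

import numpy as np
import gymnasium as gym
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="ahforoughi/taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```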
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]} | ahforoughi/taxi-v3 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-18T01:19:39+00:00 | [] | [] | TAGS
#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing1 Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3 .
## Usage
| [
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] | [
"TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | kreas/Mistral-7B-v0.1-GPTQ-2bit | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"2-bit",
"region:us"
] | null | 2024-04-18T01:20:25+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #2-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #2-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Gemma 2B Translation v0.102
- Eval Loss: `1.35643`
- Train Loss: `1.46109`
- lr: `3e-05`
- optimizer: adamw
- lr_scheduler_type: cosine
## Prompt Template
```
<bos>### English
Hamsters don't eat cats.
### Korean
햄스터는 고양이를 먹지 않습니다.<eos>
```
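A minimal generation sketch following the template above (Gemma tokenizers normally prepend `<bos>` on their own, so the prompt starts at the `### English` header):

```python
# Minimal sketch: translate with the prompt template shown above.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "lemon-mint/gemma-2b-translation-v0.102"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

prompt = "### English\nHamsters don't eat cats.\n### Korean\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```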
## Model Description
- **Developed by:** `lemon-mint`
- **Model type:** Gemma
- **Language(s) (NLP):** English
- **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** [beomi/gemma-ko-2b](https://huggingface.co/beomi/gemma-ko-2b)
| {"language": ["ko"], "license": "gemma", "library_name": "transformers", "tags": ["gemma", "pytorch", "instruct", "finetune", "translation"], "datasets": ["traintogpb/aihub-flores-koen-integrated-sparta-30k"], "widget": [{"messages": [{"role": "user", "content": "Hamsters don't eat cats."}]}], "inference": {"parameters": {"max_new_tokens": 2048}}, "base_model": "beomi/gemma-ko-2b", "pipeline_tag": "text-generation"} | lemon-mint/gemma-2b-translation-v0.102 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"pytorch",
"instruct",
"finetune",
"translation",
"conversational",
"ko",
"dataset:traintogpb/aihub-flores-koen-integrated-sparta-30k",
"base_model:beomi/gemma-ko-2b",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:21:10+00:00 | [] | [
"ko"
] | TAGS
#transformers #safetensors #gemma #text-generation #pytorch #instruct #finetune #translation #conversational #ko #dataset-traintogpb/aihub-flores-koen-integrated-sparta-30k #base_model-beomi/gemma-ko-2b #license-gemma #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Gemma 2B Translation v0.102
- Eval Loss: '1.35643'
- Train Loss: '1.46109'
- lr: '3e-05'
- optimizer: adamw
- lr_scheduler_type: cosine
## Prompt Template
## Model Description
- Developed by: 'lemon-mint'
- Model type: Gemma
- Language(s) (NLP): English
- License: gemma-terms-of-use
- Finetuned from model: beomi/gemma-ko-2b
| [
"# Gemma 2B Translation v0.102\n\n- Eval Loss: '1.35643'\n- Train Loss: '1.46109'\n- lr: '3e-05'\n- optimizer: adamw\n- lr_scheduler_type: cosine",
"## Prompt Template",
"## Model Description\n\n- Developed by: 'lemon-mint'\n- Model type: Gemma\n- Language(s) (NLP): English\n- License: gemma-terms-of-use\n- Finetuned from model: beomi/gemma-ko-2b"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #pytorch #instruct #finetune #translation #conversational #ko #dataset-traintogpb/aihub-flores-koen-integrated-sparta-30k #base_model-beomi/gemma-ko-2b #license-gemma #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Gemma 2B Translation v0.102\n\n- Eval Loss: '1.35643'\n- Train Loss: '1.46109'\n- lr: '3e-05'\n- optimizer: adamw\n- lr_scheduler_type: cosine",
"## Prompt Template",
"## Model Description\n\n- Developed by: 'lemon-mint'\n- Model type: Gemma\n- Language(s) (NLP): English\n- License: gemma-terms-of-use\n- Finetuned from model: beomi/gemma-ko-2b"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)
* [vihangd/DopeyTinyLlama-1.1B-v1](https://huggingface.co/vihangd/DopeyTinyLlama-1.1B-v1)
* [l3utterfly/tinyllama-1.1b-layla-v4](https://huggingface.co/l3utterfly/tinyllama-1.1b-layla-v4)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/palmer-003
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: l3utterfly/tinyllama-1.1b-layla-v4
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: vihangd/DopeyTinyLlama-1.1B-v1
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T
parameters:
normalize: true
int8_mask: true
dtype: float16
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["appvoid/palmer-003", "vihangd/DopeyTinyLlama-1.1B-v1", "l3utterfly/tinyllama-1.1b-layla-v4", "TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T"]} | appvoid/palmer-instruct-test-16 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:appvoid/palmer-003",
"base_model:vihangd/DopeyTinyLlama-1.1B-v1",
"base_model:l3utterfly/tinyllama-1.1b-layla-v4",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:21:37+00:00 | [
"2306.01708"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-appvoid/palmer-003 #base_model-vihangd/DopeyTinyLlama-1.1B-v1 #base_model-l3utterfly/tinyllama-1.1b-layla-v4 #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T as a base.
### Models Merged
The following models were included in the merge:
* appvoid/palmer-003
* vihangd/DopeyTinyLlama-1.1B-v1
* l3utterfly/tinyllama-1.1b-layla-v4
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* appvoid/palmer-003\n* vihangd/DopeyTinyLlama-1.1B-v1\n* l3utterfly/tinyllama-1.1b-layla-v4",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-appvoid/palmer-003 #base_model-vihangd/DopeyTinyLlama-1.1B-v1 #base_model-l3utterfly/tinyllama-1.1b-layla-v4 #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* appvoid/palmer-003\n* vihangd/DopeyTinyLlama-1.1B-v1\n* l3utterfly/tinyllama-1.1b-layla-v4",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers | # nbeerbower/flammen17-py-DPO-v1-7B AWQ
- Model creator: [nbeerbower](https://huggingface.co/nbeerbower)
- Original model: [flammen17-py-DPO-v1-7B](https://huggingface.co/nbeerbower/flammen17-py-DPO-v1-7B)

## Model Summary
A Mistral 7B LLM built from merging pretrained models and finetuning on [Jon Durbin](https://huggingface.co/jondurbin)'s [py-dpo-v0.1](https://huggingface.co/datasets/jondurbin/py-dpo-v0.1).
Finetuned using an A100 on Google Colab. 🙏
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
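For a quick look at the preference data behind this finetune, the sketch below simply loads and inspects it; the exact column names are an assumption, but DPO-style sets typically carry prompt/chosen/rejected fields.

```python
# Rough sketch: inspect the DPO preference dataset used for this finetune.
# Column names are an assumption; check ds.column_names before wiring it into a DPO trainer.
from datasets import load_dataset

ds = load_dataset("jondurbin/py-dpo-v0.1", split="train")
print(ds.column_names)
print(ds[0])
```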
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "experimental"], "datasets": ["jondurbin/py-dpo-v0.1"], "base_model": ["nbeerbower/flammen17-mistral-7B"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/flammen17-py-DPO-v1-7B-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"experimental",
"dataset:jondurbin/py-dpo-v0.1",
"base_model:nbeerbower/flammen17-mistral-7B",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:23:15+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #experimental #dataset-jondurbin/py-dpo-v0.1 #base_model-nbeerbower/flammen17-mistral-7B #license-apache-2.0 #text-generation-inference #region-us
| # nbeerbower/flammen17-py-DPO-v1-7B AWQ
- Model creator: nbeerbower
- Original model: flammen17-py-DPO-v1-7B
!image/png
## Model Summary
A Mistral 7B LLM built from merging pretrained models and finetuning on Jon Durbin's py-dpo-v0.1.
Finetuned using an A100 on Google Colab.
Fine-tune a Mistral-7b model with Direct Preference Optimization - Maxime Labonne
| [
"# nbeerbower/flammen17-py-DPO-v1-7B AWQ\n\n- Model creator: nbeerbower\n- Original model: flammen17-py-DPO-v1-7B\n\n!image/png",
"## Model Summary\n\nA Mistral 7B LLM built from merging pretrained models and finetuning on Jon Durbin's py-dpo-v0.1.\n\nFinetuned using an A100 on Google Colab. \n\nFine-tune a Mistral-7b model with Direct Preference Optimization - Maxime Labonne"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #experimental #dataset-jondurbin/py-dpo-v0.1 #base_model-nbeerbower/flammen17-mistral-7B #license-apache-2.0 #text-generation-inference #region-us \n",
"# nbeerbower/flammen17-py-DPO-v1-7B AWQ\n\n- Model creator: nbeerbower\n- Original model: flammen17-py-DPO-v1-7B\n\n!image/png",
"## Model Summary\n\nA Mistral 7B LLM built from merging pretrained models and finetuning on Jon Durbin's py-dpo-v0.1.\n\nFinetuned using an A100 on Google Colab. \n\nFine-tune a Mistral-7b model with Direct Preference Optimization - Maxime Labonne"
] |
text-generation | transformers | # Mistral-RealworldQA-v0.2-7b SFT
<img src="https://i.imgur.com/Pf53ms5.jpeg" width="400"/>
GGUFs can be found [here](https://huggingface.co/InferenceIllusionist/Mistral-RealworldQA-v0.2-7b-SFT-GGUF)
An experiment with the goal of reducing hallucinations in [VQA](https://huggingface.co/tasks/visual-question-answering)
First in a series of experiments centering around fine-tuning for image captioning.
<h1>Release Notes</h1>
* v0.1 - Initial Release
* <b>v0.2</b> (Current) - Updated base model to the official Mistral-7b fp16 release; refined dataset and instruction formatting
<h2>Background & Methodology</h2>
Mistral-7b-02 base model was fine-tuned using the [RealWorldQA dataset](https://huggingface.co/datasets/visheratin/realworldqa), originally provided by the X.Ai Team here: https://x.ai/blog/grok-1.5v
<h1>Vision Results</h1>
Example 1
<img src="https://i.imgur.com/E9mS4Xb.jpeg" width="400"/>
Example 2
<img src="https://i.imgur.com/SmTz1Yd.jpeg" width="400"/>
* The experiment yielded a model that provides shorter, less verbose output for questions about pictures
* The likelihood of hallucinations in output has decreased; however, the model can still be easily led into inaccurate answers by the user
* Best suited for captioning use cases that require concise descriptions and low token counts
* This model lacks the conversational prose of Excalibur-7b-DPO and is much "drier" in tone
<b>Requires additional mmproj file. You have two options for vision functionality (available inside this repo):</b>
1. [Quantized - Limited VRAM Option (197mb)](https://huggingface.co/InferenceIllusionist/Mistral-RealworldQA-v0.2-7b-SFT/resolve/main/mistral-7b-mmproj-v1.5-Q4_1.gguf?download=true)
2. [Unquantized - Premium Option / Best Quality (596mb)](https://huggingface.co/InferenceIllusionist/Mistral-RealworldQA-v0.2-7b-SFT/resolve/main/mmproj-model-f16.gguf?download=true)
Select the gguf file of your choice in [Koboldcpp](https://github.com/LostRuins/koboldcpp/releases/) as usual, then make sure to choose the mmproj file above in the LLaVA mmproj field of the model submenu:
<img src="https://i.imgur.com/x8vqH29.png" width="425"/>
## Prompt Format
Use Alpaca for best results.
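For reference, a sketch of the standard Alpaca layout (the exact wording is an assumption; adjust it if your frontend builds the prompt differently):

```python
# Rough sketch: build an Alpaca-style prompt (standard template wording assumed).
def alpaca_prompt(instruction: str, context: str = "") -> str:
    header = (
        "Below is an instruction that describes a task"
        + (", paired with an input that provides further context" if context else "")
        + ". Write a response that appropriately completes the request.\n\n"
    )
    body = f"### Instruction:\n{instruction}\n\n"
    if context:
        body += f"### Input:\n{context}\n\n"
    return header + body + "### Response:\n"

print(alpaca_prompt("Describe what is happening in the attached photo."))
```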
## Other info
- **Developed by:** InferenceIllusionist
- **License:** apache-2.0
- **Finetuned from model :** mistral-community/Mistral-7B-v0.2
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "datasets": ["visheratin/realworldqa"], "base_model": "unsloth/mistral-7b-v0.2-bnb-4bit"} | InferenceIllusionist/Mistral-RealworldQA-v0.2-7b-SFT | null | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"dataset:visheratin/realworldqa",
"base_model:unsloth/mistral-7b-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:24:37+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #gguf #mistral #text-generation #text-generation-inference #unsloth #trl #sft #en #dataset-visheratin/realworldqa #base_model-unsloth/mistral-7b-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # Mistral-RealworldQA-v0.2-7b SFT
<img src="https://i.URL width="400"/>
GGUFs can be found here
An experiment with the goal of reducing hallucinations in VQA
First in a series of experiments centering around fine-tuning for image captioning.
<h1>Release Notes</h1>
* v0.1 - Initial Release
* <b>v0.2</b> (Current)- Updating base model to official Mistral-7b fp16 release, refinements to dataset and instruction formating
<h2>Background & Methodology</h2>
Mistral-7b-02 base model was fine-tuned using the RealWorldQA dataset, originally provided by the X.Ai Team here: https://x.ai/blog/grok-1.5v
<h1>Vision Results</h1>
Example 1
<img src="https://i.URL width="400"/>
Example 2
<img src="https://i.URL width="400"/>
* Experiment yielded model that provides shorter, less verbose output for questions about pictures
* The likelihood of hallucinations in output has decreased, however, the model can still be easily influenced to be inaccurate by the user
* Best suited for captioning use cases that require concise descriptions and low token counts
* This model lacks the conversational prose of Excalibur-7b-DPO and is much "drier" in tone
<b>Requires additional mmproj file. You have two options for vision functionality (available inside this repo):</b>
1. Quantized - Limited VRAM Option (197mb)
2. Unquantized - Premium Option / Best Quality (596mb)
Select the gguf file of your choice in Koboldcpp as usual, then make sure to choose the mmproj file above in the LLaVA mmproj field of the model submenu:
<img src="https://i.URL width="425"/>
## Prompt Format
Use Alpaca for best results.
## Other info
- Developed by: InferenceIllusionist
- License: apache-2.0
- Finetuned from model : mistral-community/Mistral-7B-v0.2
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/> | [
"# Mistral-RealworldQA-v0.2-7b SFT\n\n<img src=\"https://i.URL width=\"400\"/>\n\nGGUFs can be found here\n\n\nAn experiment with the goal of reducing hallucinations in VQA\n\nFirst in a series of experiments centering around fine-tuning for image captioning.\n\n<h1>Release Notes</h1>\n\n* v0.1 - Initial Release\n* <b>v0.2</b> (Current)- Updating base model to official Mistral-7b fp16 release, refinements to dataset and instruction formating\n\n<h2>Background & Methodology</h2>\n\nMistral-7b-02 base model was fine-tuned using the RealWorldQA dataset, originally provided by the X.Ai Team here: https://x.ai/blog/grok-1.5v\n\n<h1>Vision Results</h1>\n\nExample 1\n<img src=\"https://i.URL width=\"400\"/>\nExample 2\n<img src=\"https://i.URL width=\"400\"/>\n\n* Experiment yielded model that provides shorter, less verbose output for questions about pictures\n* The likelihood of hallucinations in output has decreased, however, the model can still be easily influenced to be inaccurate by the user\n* Best suited for captioning use cases that require concise descriptions and low token counts\n* This model lacks the conversational prose of Excalibur-7b-DPO and is much \"drier\" in tone\n\n<b>Requires additional mmproj file. You have two options for vision functionality (available inside this repo):</b>\n 1. Quantized - Limited VRAM Option (197mb)\n 2. Unquantized - Premium Option / Best Quality (596mb)\n\nSelect the gguf file of your choice in Koboldcpp as usual, then make sure to choose the mmproj file above in the LLaVA mmproj field of the model submenu:\n<img src=\"https://i.URL width=\"425\"/>",
"## Prompt Format\nUse Alpaca for best results.",
"## Other info\n- Developed by: InferenceIllusionist\n- License: apache-2.0\n- Finetuned from model : mistral-community/Mistral-7B-v0.2\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #gguf #mistral #text-generation #text-generation-inference #unsloth #trl #sft #en #dataset-visheratin/realworldqa #base_model-unsloth/mistral-7b-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Mistral-RealworldQA-v0.2-7b SFT\n\n<img src=\"https://i.URL width=\"400\"/>\n\nGGUFs can be found here\n\n\nAn experiment with the goal of reducing hallucinations in VQA\n\nFirst in a series of experiments centering around fine-tuning for image captioning.\n\n<h1>Release Notes</h1>\n\n* v0.1 - Initial Release\n* <b>v0.2</b> (Current)- Updating base model to official Mistral-7b fp16 release, refinements to dataset and instruction formating\n\n<h2>Background & Methodology</h2>\n\nMistral-7b-02 base model was fine-tuned using the RealWorldQA dataset, originally provided by the X.Ai Team here: https://x.ai/blog/grok-1.5v\n\n<h1>Vision Results</h1>\n\nExample 1\n<img src=\"https://i.URL width=\"400\"/>\nExample 2\n<img src=\"https://i.URL width=\"400\"/>\n\n* Experiment yielded model that provides shorter, less verbose output for questions about pictures\n* The likelihood of hallucinations in output has decreased, however, the model can still be easily influenced to be inaccurate by the user\n* Best suited for captioning use cases that require concise descriptions and low token counts\n* This model lacks the conversational prose of Excalibur-7b-DPO and is much \"drier\" in tone\n\n<b>Requires additional mmproj file. You have two options for vision functionality (available inside this repo):</b>\n 1. Quantized - Limited VRAM Option (197mb)\n 2. Unquantized - Premium Option / Best Quality (596mb)\n\nSelect the gguf file of your choice in Koboldcpp as usual, then make sure to choose the mmproj file above in the LLaVA mmproj field of the model submenu:\n<img src=\"https://i.URL width=\"425\"/>",
"## Prompt Format\nUse Alpaca for best results.",
"## Other info\n- Developed by: InferenceIllusionist\n- License: apache-2.0\n- Finetuned from model : mistral-community/Mistral-7B-v0.2\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers | # nbeerbower/Maidphin-Kunoichi-7B AWQ
- Model creator: [nbeerbower](https://huggingface.co/nbeerbower)
- Original model: [Maidphin-Kunoichi-7B](https://huggingface.co/nbeerbower/Maidphin-Kunoichi-7B)
## Model Summary
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
This model was merged using the SLERP merge method.
The following models were included in the merge:
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [nbeerbower/maidphin](https://huggingface.co/nbeerbower/maidphin)
| {"license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["mergekit", "merge", "quantized", "4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "chatml"], "pipeline_tag": "text-generation", "base_model": ["SanjiWatsuki/Kunoichi-DPO-v2-7B", "nbeerbower/maidphin"], "inference": false, "quantized_by": "Suparious"} | solidrust/Maidphin-Kunoichi-7B-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"quantized",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"chatml",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:nbeerbower/maidphin",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:26:15+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #quantized #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #base_model-nbeerbower/maidphin #license-cc-by-nc-4.0 #text-generation-inference #region-us
| # nbeerbower/Maidphin-Kunoichi-7B AWQ
- Model creator: nbeerbower
- Original model: Maidphin-Kunoichi-7B
## Model Summary
This is a merge of pre-trained language models created using mergekit.
This model was merged using the SLERP merge method.
The following models were included in the merge:
* SanjiWatsuki/Kunoichi-DPO-v2-7B
* nbeerbower/maidphin
| [
"# nbeerbower/Maidphin-Kunoichi-7B AWQ\n\n- Model creator: nbeerbower\n- Original model: Maidphin-Kunoichi-7B",
"## Model Summary\n\nThis is a merge of pre-trained language models created using mergekit.\n\nThis model was merged using the SLERP merge method.\n\nThe following models were included in the merge:\n* SanjiWatsuki/Kunoichi-DPO-v2-7B\n* nbeerbower/maidphin"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #quantized #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #base_model-nbeerbower/maidphin #license-cc-by-nc-4.0 #text-generation-inference #region-us \n",
"# nbeerbower/Maidphin-Kunoichi-7B AWQ\n\n- Model creator: nbeerbower\n- Original model: Maidphin-Kunoichi-7B",
"## Model Summary\n\nThis is a merge of pre-trained language models created using mergekit.\n\nThis model was merged using the SLERP merge method.\n\nThe following models were included in the merge:\n* SanjiWatsuki/Kunoichi-DPO-v2-7B\n* nbeerbower/maidphin"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="ahforoughi/taxi-v3-100k", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "taxi-v3-100k", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]} | ahforoughi/taxi-v3-100k | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-18T01:26:40+00:00 | [] | [] | TAGS
#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing1 Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3 .
## Usage
| [
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] | [
"TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Kotokin/Merged-RP-Stew-V2-51B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Merged-RP-Stew-V2-51B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
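For the split Q8_0 download listed below, the parts only need to be concatenated byte-for-byte before loading. A minimal sketch, using `llama-cpp-python` as one example runtime (any llama.cpp-based tool consumes the same GGUF files):

```python
# Rough sketch: join the split Q8_0 file, then load a quant with llama-cpp-python.
# llama-cpp-python is one option among many; koboldcpp, ollama, etc. read the same GGUF files.
import shutil

parts = [
    "Merged-RP-Stew-V2-51B.Q8_0.gguf.part1of2",
    "Merged-RP-Stew-V2-51B.Q8_0.gguf.part2of2",
]
with open("Merged-RP-Stew-V2-51B.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # plain byte-wise concatenation, no re-quantization needed

from llama_cpp import Llama

# Single-file quants load directly; point model_path at the concatenated Q8_0 file if you built it above.
llm = Llama(model_path="Merged-RP-Stew-V2-51B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write one sentence of scene-setting for a tavern roleplay.", max_tokens=48)
print(out["choices"][0]["text"])
```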
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Merged-RP-Stew-V2-51B-GGUF/resolve/main/Merged-RP-Stew-V2-51B.Q2_K.gguf) | Q2_K | 19.1 | |
| [GGUF](https://huggingface.co/mradermacher/Merged-RP-Stew-V2-51B-GGUF/resolve/main/Merged-RP-Stew-V2-51B.IQ3_XS.gguf) | IQ3_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Merged-RP-Stew-V2-51B-GGUF/resolve/main/Merged-RP-Stew-V2-51B.Q3_K_S.gguf) | Q3_K_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Merged-RP-Stew-V2-51B-GGUF/resolve/main/Merged-RP-Stew-V2-51B.IQ3_S.gguf) | IQ3_S | 22.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Merged-RP-Stew-V2-51B-GGUF/resolve/main/Merged-RP-Stew-V2-51B.IQ3_M.gguf) | IQ3_M | 23.2 | |
| [GGUF](https://huggingface.co/mradermacher/Merged-RP-Stew-V2-51B-GGUF/resolve/main/Merged-RP-Stew-V2-51B.Q3_K_M.gguf) | Q3_K_M | 24.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Merged-RP-Stew-V2-51B-GGUF/resolve/main/Merged-RP-Stew-V2-51B.Q3_K_L.gguf) | Q3_K_L | 27.0 | |
| [GGUF](https://huggingface.co/mradermacher/Merged-RP-Stew-V2-51B-GGUF/resolve/main/Merged-RP-Stew-V2-51B.IQ4_XS.gguf) | IQ4_XS | 27.8 | |
| [GGUF](https://huggingface.co/mradermacher/Merged-RP-Stew-V2-51B-GGUF/resolve/main/Merged-RP-Stew-V2-51B.Q4_K_S.gguf) | Q4_K_S | 29.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Merged-RP-Stew-V2-51B-GGUF/resolve/main/Merged-RP-Stew-V2-51B.Q4_K_M.gguf) | Q4_K_M | 30.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Merged-RP-Stew-V2-51B-GGUF/resolve/main/Merged-RP-Stew-V2-51B.Q5_K_S.gguf) | Q5_K_S | 35.3 | |
| [GGUF](https://huggingface.co/mradermacher/Merged-RP-Stew-V2-51B-GGUF/resolve/main/Merged-RP-Stew-V2-51B.Q5_K_M.gguf) | Q5_K_M | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/Merged-RP-Stew-V2-51B-GGUF/resolve/main/Merged-RP-Stew-V2-51B.Q6_K.gguf) | Q6_K | 42.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Merged-RP-Stew-V2-51B-GGUF/resolve/main/Merged-RP-Stew-V2-51B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Merged-RP-Stew-V2-51B-GGUF/resolve/main/Merged-RP-Stew-V2-51B.Q8_0.gguf.part2of2) | Q8_0 | 54.4 | fast, best quality |
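The Q8_0 quant ships as two parts; a minimal sketch of downloading and joining them into a single file (plain binary concatenation is all that is needed; the part names are taken from the links above):

```python
import shutil
from huggingface_hub import hf_hub_download

parts = [
    "Merged-RP-Stew-V2-51B.Q8_0.gguf.part1of2",
    "Merged-RP-Stew-V2-51B.Q8_0.gguf.part2of2",
]
with open("Merged-RP-Stew-V2-51B.Q8_0.gguf", "wb") as out:
    for name in parts:
        part_path = hf_hub_download(repo_id="mradermacher/Merged-RP-Stew-V2-51B-GGUF", filename=name)
        with open(part_path, "rb") as part:
            shutil.copyfileobj(part, out)  # equivalent to `cat part1 part2 > full.gguf`
```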
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["merge", "roleplay", "exl2", "not-for-all-audiences"], "base_model": "Kotokin/Merged-RP-Stew-V2-51B", "license_link": "https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE", "license_name": "yi-34b", "quantized_by": "mradermacher"} | mradermacher/Merged-RP-Stew-V2-51B-GGUF | null | [
"transformers",
"gguf",
"merge",
"roleplay",
"exl2",
"not-for-all-audiences",
"en",
"base_model:Kotokin/Merged-RP-Stew-V2-51B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:26:43+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #merge #roleplay #exl2 #not-for-all-audiences #en #base_model-Kotokin/Merged-RP-Stew-V2-51B #license-other #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #merge #roleplay #exl2 #not-for-all-audiences #en #base_model-Kotokin/Merged-RP-Stew-V2-51B #license-other #endpoints_compatible #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_shp2_dpo1
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3878
- Rewards/chosen: -7.7597
- Rewards/rejected: -8.0105
- Rewards/accuracies: 0.6100
- Rewards/margins: 0.2509
- Logps/rejected: -290.7956
- Logps/chosen: -314.9470
- Logits/rejected: -1.2493
- Logits/chosen: -1.2818
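The repository metadata marks this checkpoint as a PEFT adapter on top of Llama-2-7b-chat; a minimal loading sketch (assumes access to the gated base model):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
# Attaches the DPO-trained PEFT adapter published in this repository.
model = PeftModel.from_pretrained(base, "guoyu-zhang/model_shp2_dpo1")
```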
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
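For reference, these settings correspond roughly to the following transformers `TrainingArguments` (a sketch only; the output directory name is hypothetical, and the Adam betas/epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="model_shp2_dpo1",      # hypothetical path
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=4,     # 4 x 4 = total train batch size 16
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
)
```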
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0884 | 2.67 | 100 | 0.9487 | -2.0234 | -2.1729 | 0.5500 | 0.1495 | -232.4193 | -257.5841 | -1.3214 | -1.2894 |
| 0.0009 | 5.33 | 200 | 1.4986 | -7.8348 | -7.9036 | 0.5200 | 0.0687 | -289.7258 | -315.6984 | -1.3177 | -1.3419 |
| 0.0001 | 8.0 | 300 | 1.3323 | -7.1704 | -7.4119 | 0.6100 | 0.2415 | -284.8095 | -309.0548 | -1.2674 | -1.2968 |
| 0.0001 | 10.67 | 400 | 1.3579 | -7.4927 | -7.7408 | 0.6100 | 0.2481 | -288.0981 | -312.2774 | -1.2590 | -1.2900 |
| 0.0001 | 13.33 | 500 | 1.3799 | -7.6344 | -7.8716 | 0.6000 | 0.2372 | -289.4062 | -313.6946 | -1.2541 | -1.2860 |
| 0.0001 | 16.0 | 600 | 1.3885 | -7.7023 | -7.9449 | 0.5900 | 0.2425 | -290.1390 | -314.3737 | -1.2519 | -1.2836 |
| 0.0001 | 18.67 | 700 | 1.3971 | -7.7545 | -7.9878 | 0.6100 | 0.2332 | -290.5677 | -314.8956 | -1.2500 | -1.2826 |
| 0.0001 | 21.33 | 800 | 1.3951 | -7.7604 | -8.0061 | 0.6000 | 0.2458 | -290.7514 | -314.9539 | -1.2490 | -1.2817 |
| 0.0001 | 24.0 | 900 | 1.3904 | -7.7591 | -8.0015 | 0.6100 | 0.2424 | -290.7051 | -314.9411 | -1.2491 | -1.2818 |
| 0.0001 | 26.67 | 1000 | 1.3878 | -7.7597 | -8.0105 | 0.6100 | 0.2509 | -290.7956 | -314.9470 | -1.2493 | -1.2818 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "llama2", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_shp2_dpo1", "results": []}]} | guoyu-zhang/model_shp2_dpo1 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] | null | 2024-04-18T01:27:20+00:00 | [] | [] | TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us
| model\_shp2\_dpo1
=================
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3878
* Rewards/chosen: -7.7597
* Rewards/rejected: -8.0105
* Rewards/accuracies: 0.6100
* Rewards/margins: 0.2509
* Logps/rejected: -290.7956
* Logps/chosen: -314.9470
* Logits/rejected: -1.2493
* Logits/chosen: -1.2818
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 4
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* training\_steps: 1000
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.1
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
sentence-similarity | sentence-transformers |
# mteb-pt/average_pt_nilc_fasttext_skip_s1000
This is an adaptation of pre-trained Portuguese fastText Word Embeddings to a [sentence-transformers](https://www.SBERT.net) model.
The original pre-trained word embeddings can be found at: [http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc](http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc).
This model maps sentences & paragraphs to a 1000 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mteb-pt/average_pt_nilc_fasttext_skip_s1000')
embeddings = model.encode(sentences)
print(embeddings)
```
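For semantic search, the embeddings can be compared with cosine similarity; a short illustrative sketch (the Portuguese sentences are made-up examples):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('mteb-pt/average_pt_nilc_fasttext_skip_s1000')
queries = ["Qual é a capital do Brasil?"]
corpus = ["Brasília é a capital do Brasil.", "O céu está azul hoje."]
# Cosine similarity between each query and each corpus sentence.
scores = util.cos_sim(model.encode(queries), model.encode(corpus))
print(scores)
```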
## Evaluation Results
For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: [mteb-pt/leaderboard](https://huggingface.co/spaces/mteb-pt/leaderboard)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(929606, 1000)
)
(1): Pooling({'word_embedding_dimension': 1000, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
```bibtex
@inproceedings{hartmann2017portuguese,
title = {Portuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks},
author = {Hartmann, Nathan S and
Fonseca, Erick R and
Shulby, Christopher D and
Treviso, Marcos V and
             Rodrigues, J{\'{e}}ssica S and
             Alu{\'{\i}}sio, Sandra Maria},
year = {2017},
publisher = {SBC},
booktitle = {Brazilian Symposium in Information and Human Language Technology - STIL},
url = {https://sol.sbc.org.br/index.php/stil/article/view/4008}
}
``` | {"language": ["pt"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | pt-mteb/average_pt_nilc_fasttext_skip_s1000 | null | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"pt",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:29:02+00:00 | [] | [
"pt"
] | TAGS
#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us
|
# mteb-pt/average_pt_nilc_fasttext_skip_s1000
This is an adaptation of pre-trained Portuguese fastText Word Embeddings to a sentence-transformers model.
The original pre-trained word embeddings can be found at: URL
This model maps sentences & paragraphs to a 1000 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard
## Full Model Architecture
## Citing & Authors
| [
"# mteb-pt/average_pt_nilc_fasttext_skip_s1000\n\nThis is an adaptation of pre-trained Portuguese fastText Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 1000 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us \n",
"# mteb-pt/average_pt_nilc_fasttext_skip_s1000\n\nThis is an adaptation of pre-trained Portuguese fastText Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 1000 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard",
"## Full Model Architecture",
"## Citing & Authors"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)
* [l3utterfly/tinyllama-1.1b-layla-v4](https://huggingface.co/l3utterfly/tinyllama-1.1b-layla-v4)
* [vihangd/DopeyTinyLlama-1.1B-v1](https://huggingface.co/vihangd/DopeyTinyLlama-1.1B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: vihangd/DopeyTinyLlama-1.1B-v1
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: l3utterfly/tinyllama-1.1b-layla-v4
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: appvoid/palmer-003
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
parameters:
normalize: true
int8_mask: true
dtype: float16
```
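Once merged, the result loads like any other causal LM; a minimal sketch using this repository's id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("appvoid/palmer-instruct-test-17")
model = AutoModelForCausalLM.from_pretrained("appvoid/palmer-instruct-test-17", torch_dtype="auto")
```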
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["appvoid/palmer-003", "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "l3utterfly/tinyllama-1.1b-layla-v4", "vihangd/DopeyTinyLlama-1.1B-v1"]} | appvoid/palmer-instruct-test-17 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:appvoid/palmer-003",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"base_model:l3utterfly/tinyllama-1.1b-layla-v4",
"base_model:vihangd/DopeyTinyLlama-1.1B-v1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:29:28+00:00 | [
"2306.01708"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-appvoid/palmer-003 #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #base_model-l3utterfly/tinyllama-1.1b-layla-v4 #base_model-vihangd/DopeyTinyLlama-1.1B-v1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T as a base.
### Models Merged
The following models were included in the merge:
* appvoid/palmer-003
* l3utterfly/tinyllama-1.1b-layla-v4
* vihangd/DopeyTinyLlama-1.1B-v1
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* appvoid/palmer-003\n* l3utterfly/tinyllama-1.1b-layla-v4\n* vihangd/DopeyTinyLlama-1.1B-v1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-appvoid/palmer-003 #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #base_model-l3utterfly/tinyllama-1.1b-layla-v4 #base_model-vihangd/DopeyTinyLlama-1.1B-v1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* appvoid/palmer-003\n* l3utterfly/tinyllama-1.1b-layla-v4\n* vihangd/DopeyTinyLlama-1.1B-v1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 | {"library_name": "peft", "base_model": "unsloth/mistral-7b-bnb-4bit"} | HongxuanLi/test_chatbot | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/mistral-7b-bnb-4bit",
"region:us"
] | null | 2024-04-18T01:31:42+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-unsloth/mistral-7b-bnb-4bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-unsloth/mistral-7b-bnb-4bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)
* [l3utterfly/tinyllama-1.1b-layla-v4](https://huggingface.co/l3utterfly/tinyllama-1.1b-layla-v4)
* [vihangd/DopeyTinyLlama-1.1B-v1](https://huggingface.co/vihangd/DopeyTinyLlama-1.1B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: l3utterfly/tinyllama-1.1b-layla-v4
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: vihangd/DopeyTinyLlama-1.1B-v1
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: appvoid/palmer-003
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
parameters:
normalize: true
int8_mask: true
dtype: float16
```
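A quick way to try the merged checkpoint is the text-generation pipeline; a minimal sketch with this repository's id (the prompt is just an example):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="appvoid/palmer-instruct-test-18")
print(generator("Instruction: name three colors.\nAnswer:", max_new_tokens=32)[0]["generated_text"])
```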
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["appvoid/palmer-003", "l3utterfly/tinyllama-1.1b-layla-v4", "vihangd/DopeyTinyLlama-1.1B-v1", "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"]} | appvoid/palmer-instruct-test-18 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:appvoid/palmer-003",
"base_model:l3utterfly/tinyllama-1.1b-layla-v4",
"base_model:vihangd/DopeyTinyLlama-1.1B-v1",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:33:48+00:00 | [
"2306.01708"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-appvoid/palmer-003 #base_model-l3utterfly/tinyllama-1.1b-layla-v4 #base_model-vihangd/DopeyTinyLlama-1.1B-v1 #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T as a base.
### Models Merged
The following models were included in the merge:
* appvoid/palmer-003
* l3utterfly/tinyllama-1.1b-layla-v4
* vihangd/DopeyTinyLlama-1.1B-v1
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* appvoid/palmer-003\n* l3utterfly/tinyllama-1.1b-layla-v4\n* vihangd/DopeyTinyLlama-1.1B-v1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-appvoid/palmer-003 #base_model-l3utterfly/tinyllama-1.1b-layla-v4 #base_model-vihangd/DopeyTinyLlama-1.1B-v1 #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* appvoid/palmer-003\n* l3utterfly/tinyllama-1.1b-layla-v4\n* vihangd/DopeyTinyLlama-1.1B-v1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | mjyoo2/kullm_arc_lora_ft | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:34:39+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
sentence-similarity | sentence-transformers |
# mteb-pt/average_pt_nilc_glove_s1000
This is an adaptation of pre-trained Portuguese GloVe Word Embeddings to a [sentence-transformers](https://www.SBERT.net) model.
The original pre-trained word embeddings can be found at: [http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc](http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc).
This model maps sentences & paragraphs to a 1000 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mteb-pt/average_pt_nilc_glove_s1000')
embeddings = model.encode(sentences)
print(embeddings)
```
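The card lists clustering as a use case; a brief sketch with scikit-learn's KMeans on the embeddings (sentences and cluster count are illustrative only):

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer('mteb-pt/average_pt_nilc_glove_s1000')
sentences = [
    "O time venceu o campeonato.",
    "O congresso aprovou a nova lei.",
    "O jogador marcou dois gols.",
    "O presidente vetou o projeto.",
]
embeddings = model.encode(sentences)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
print(labels)  # cluster id per sentence
```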
## Evaluation Results
For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: [mteb-pt/leaderboard](https://huggingface.co/spaces/mteb-pt/leaderboard)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(929606, 1000)
)
(1): Pooling({'word_embedding_dimension': 1000, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
```bibtex
@inproceedings{hartmann2017portuguese,
title = {Portuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks},
author = {Hartmann, Nathan S and
Fonseca, Erick R and
Shulby, Christopher D and
Treviso, Marcos V and
             Rodrigues, J{\'{e}}ssica S and
             Alu{\'{\i}}sio, Sandra Maria},
year = {2017},
publisher = {SBC},
booktitle = {Brazilian Symposium in Information and Human Language Technology - STIL},
url = {https://sol.sbc.org.br/index.php/stil/article/view/4008}
}
``` | {"language": ["pt"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | pt-mteb/average_pt_nilc_glove_s1000 | null | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"pt",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:35:58+00:00 | [] | [
"pt"
] | TAGS
#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us
|
# mteb-pt/average_pt_nilc_glove_s1000
This is an adaptation of pre-trained Portuguese GloVe Word Embeddings to a sentence-transformers model.
The original pre-trained word embeddings can be found at: URL
This model maps sentences & paragraphs to a 1000 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard
## Full Model Architecture
## Citing & Authors
| [
"# mteb-pt/average_pt_nilc_glove_s1000\n\nThis is an adaptation of pre-trained Portuguese GloVe Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 1000 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us \n",
"# mteb-pt/average_pt_nilc_glove_s1000\n\nThis is an adaptation of pre-trained Portuguese GloVe Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 1000 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard",
"## Full Model Architecture",
"## Citing & Authors"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | kreas/Mistral-7B-v0.1-GPTQ-3bit | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"3-bit",
"region:us"
] | null | 2024-04-18T01:37:30+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2_esnli_5000_2ep
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
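No usage snippet is provided with this card. As a hedged sketch only, inference with the stock Mistral-Instruct `[INST] ... [/INST]` prompt format might look like the following; the repo id comes from this row's metadata, it is an assumption that the fine-tune kept the base instruct format, and the e-SNLI-style prompt is inferred from the model name rather than documented.

```python
# Hedged sketch: prompt the fine-tuned checkpoint with the Mistral-Instruct format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mohsenfayyaz/Mistral-7B-Instruct-v0.2_esnli_5000_2ep"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = (
    "[INST] Premise: A man is playing a guitar on stage.\n"
    "Hypothesis: A person is performing music.\n"
    "Does the premise entail the hypothesis? Explain briefly. [/INST]"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```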
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
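For readers who want to reproduce this configuration, the listed values map onto Transformers `TrainingArguments` roughly as follows. This is a sketch only: the training script and dataset are not published, and the output directory is a placeholder.

```python
# Hedged reconstruction of the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Mistral-7B-Instruct-v0.2_esnli_5000_2ep",  # placeholder name, actual path unknown
    learning_rate=1.5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=32,   # 2 x 32 = 64 effective train batch size
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
# Given the trl/sft tags on this row, these args would typically be passed to trl's
# SFTTrainer together with the (unpublished) training dataset.
```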
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
| {"tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "Mistral-7B-Instruct-v0.2_esnli_5000_2ep", "results": []}]} | mohsenfayyaz/Mistral-7B-Instruct-v0.2_esnli_5000_2ep | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:40:50+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #trl #sft #generated_from_trainer #conversational #base_model-mistralai/Mistral-7B-Instruct-v0.2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mistral-7B-Instruct-v0.2_esnli_5000_2ep
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
| [
"# Mistral-7B-Instruct-v0.2_esnli_5000_2ep\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #trl #sft #generated_from_trainer #conversational #base_model-mistralai/Mistral-7B-Instruct-v0.2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mistral-7B-Instruct-v0.2_esnli_5000_2ep\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotions_flan_tf
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4970
- F1 Micro: 0.6980
- F1 Macro: 0.6126
- Accuracy: 0.2188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
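Since this run is a PEFT adapter on `google/flan-t5-base` (see the framework versions below), a sketch of the adapter setup is shown here. Note that the LoRA rank, alpha, dropout, target modules, and task type are not stated in this card; every value below is an assumption used only for illustration, and the hyperparameters above would then go into a standard `TrainingArguments`.

```python
# Hedged sketch of a LoRA setup on flan-t5-base; all LoRA values are assumptions.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model, TaskType

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,   # assumption; a classification head (SEQ_CLS) is also plausible
    r=16,                              # assumption
    lora_alpha=32,                     # assumption
    lora_dropout=0.05,                 # assumption
    target_modules=["q", "v"],         # common choice for T5 attention projections; assumption
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()     # only the adapter weights are trainable
```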
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|
| 0.8313 | 0.21 | 20 | 0.7916 | 0.4215 | 0.1586 | 0.0123 |
| 0.7813 | 0.41 | 40 | 0.7840 | 0.4717 | 0.2286 | 0.0201 |
| 0.7702 | 0.62 | 60 | 0.7485 | 0.4935 | 0.2379 | 0.0971 |
| 0.7218 | 0.83 | 80 | 0.6327 | 0.6045 | 0.3866 | 0.1256 |
| 0.6401 | 1.03 | 100 | 0.5907 | 0.6269 | 0.4310 | 0.1586 |
| 0.5951 | 1.24 | 120 | 0.5668 | 0.6459 | 0.4981 | 0.1502 |
| 0.5686 | 1.45 | 140 | 0.5458 | 0.6593 | 0.5372 | 0.1683 |
| 0.5576 | 1.65 | 160 | 0.5332 | 0.6675 | 0.5403 | 0.1722 |
| 0.5465 | 1.86 | 180 | 0.5224 | 0.6734 | 0.5667 | 0.1812 |
| 0.5436 | 2.07 | 200 | 0.5164 | 0.6807 | 0.5751 | 0.1877 |
| 0.5297 | 2.27 | 220 | 0.5149 | 0.6742 | 0.5793 | 0.1741 |
| 0.5109 | 2.48 | 240 | 0.5049 | 0.6845 | 0.5824 | 0.1929 |
| 0.5265 | 2.69 | 260 | 0.5070 | 0.6846 | 0.5859 | 0.1799 |
| 0.5028 | 2.89 | 280 | 0.5068 | 0.6847 | 0.5870 | 0.1864 |
| 0.5097 | 3.1 | 300 | 0.5025 | 0.6892 | 0.5940 | 0.2084 |
| 0.4971 | 3.31 | 320 | 0.5032 | 0.6843 | 0.5995 | 0.1890 |
| 0.4762 | 3.51 | 340 | 0.5069 | 0.6955 | 0.5928 | 0.2129 |
| 0.4811 | 3.72 | 360 | 0.4954 | 0.6898 | 0.5996 | 0.2026 |
| 0.5065 | 3.93 | 380 | 0.4961 | 0.6918 | 0.6038 | 0.1838 |
| 0.4746 | 4.13 | 400 | 0.4992 | 0.6956 | 0.6009 | 0.2142 |
| 0.4786 | 4.34 | 420 | 0.5013 | 0.6918 | 0.6018 | 0.2026 |
| 0.4832 | 4.55 | 440 | 0.4935 | 0.6904 | 0.6031 | 0.2155 |
| 0.465 | 4.75 | 460 | 0.4984 | 0.6938 | 0.6027 | 0.2071 |
| 0.4683 | 4.96 | 480 | 0.4977 | 0.6960 | 0.6011 | 0.2091 |
| 0.4573 | 5.17 | 500 | 0.4985 | 0.6915 | 0.6076 | 0.2006 |
| 0.4619 | 5.37 | 520 | 0.4952 | 0.6945 | 0.6044 | 0.2129 |
| 0.4535 | 5.58 | 540 | 0.4983 | 0.6927 | 0.6024 | 0.2078 |
| 0.4475 | 5.79 | 560 | 0.4967 | 0.6970 | 0.6064 | 0.2194 |
| 0.454 | 5.99 | 580 | 0.5027 | 0.6941 | 0.6090 | 0.1994 |
| 0.4479 | 6.2 | 600 | 0.4940 | 0.6919 | 0.6041 | 0.2117 |
| 0.4304 | 6.41 | 620 | 0.5002 | 0.6982 | 0.6114 | 0.2006 |
| 0.445 | 6.61 | 640 | 0.4970 | 0.6951 | 0.6098 | 0.2071 |
| 0.4434 | 6.82 | 660 | 0.4964 | 0.6976 | 0.6075 | 0.2136 |
| 0.4543 | 7.03 | 680 | 0.4904 | 0.6936 | 0.6086 | 0.2013 |
| 0.4474 | 7.24 | 700 | 0.4969 | 0.6960 | 0.6108 | 0.2071 |
| 0.4325 | 7.44 | 720 | 0.4998 | 0.7013 | 0.6123 | 0.2123 |
| 0.4362 | 7.65 | 740 | 0.4947 | 0.6953 | 0.6101 | 0.2091 |
| 0.4276 | 7.86 | 760 | 0.4978 | 0.6955 | 0.6119 | 0.2052 |
| 0.4392 | 8.06 | 780 | 0.4944 | 0.6967 | 0.6078 | 0.2104 |
| 0.4167 | 8.27 | 800 | 0.4987 | 0.6966 | 0.6080 | 0.2097 |
| 0.4309 | 8.48 | 820 | 0.4970 | 0.6980 | 0.6126 | 0.2188 |
| 0.42 | 8.68 | 840 | 0.4999 | 0.6977 | 0.6105 | 0.2129 |
| 0.423 | 8.89 | 860 | 0.5003 | 0.6975 | 0.6087 | 0.2142 |
| 0.4382 | 9.1 | 880 | 0.4977 | 0.6975 | 0.6115 | 0.2136 |
| 0.4182 | 9.3 | 900 | 0.4976 | 0.6981 | 0.6123 | 0.2155 |
| 0.4153 | 9.51 | 920 | 0.5000 | 0.6978 | 0.6108 | 0.2175 |
| 0.4277 | 9.72 | 940 | 0.5003 | 0.6982 | 0.6092 | 0.2168 |
| 0.4246 | 9.92 | 960 | 0.5000 | 0.6976 | 0.6093 | 0.2168 |
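For context, the F1 Micro / F1 Macro / Accuracy columns above are the usual multi-label emotion-classification metrics. A hedged sketch of how such numbers are commonly computed is below; the 0.5 threshold and the use of exact-match (subset) accuracy are assumptions, not taken from this card.

```python
# Hedged sketch of multi-label metrics of the kind reported in the table above.
import numpy as np
from sklearn.metrics import f1_score, accuracy_score

def compute_metrics(probs: np.ndarray, labels: np.ndarray, threshold: float = 0.5):
    preds = (probs >= threshold).astype(int)           # binarize per-label probabilities
    return {
        "f1_micro": f1_score(labels, preds, average="micro", zero_division=0),
        "f1_macro": f1_score(labels, preds, average="macro", zero_division=0),
        "accuracy": accuracy_score(labels, preds),      # exact-match (subset) accuracy
    }
```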
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/flan-t5-base", "model-index": [{"name": "emotions_flan_tf", "results": []}]} | yunaseo/emotions_flan_tf | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T01:41:10+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-google/flan-t5-base #license-apache-2.0 #region-us
| emotions\_flan\_tf
==================
This model is a fine-tuned version of google/flan-t5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4970
* F1 Micro: 0.6980
* F1 Macro: 0.6126
* Accuracy: 0.2188
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-google/flan-t5-base #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# ver_4.1_sft
This model is a fine-tuned version of [yanolja/EEVE-Korean-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0) on the Custom dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 0
### Training results
| Training Loss | Epoch | Step |
|:-------------:|:-----:|:----:|
| 0.7739 | 0.2 | 1124 |
| 0.7214 | 0.4 | 2248 |
| 0.6832 | 0.6 | 3372 |
| 0.6935 | 0.8 | 4496 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.15.2
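The card does not include a usage example. A minimal chat-style inference sketch for the finished checkpoint could look like the following; the repo id comes from this row's metadata, and it is assumed the tokenizer ships a chat template.

```python
# Hedged usage sketch, not from the original card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "spow12/EEVE_ver_4.1_sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "안녕하세요! 간단히 자기소개 해주세요."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```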
| {"language": ["en", "ko"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-generation-inference"], "pipeline_tag": "text-generation"} | spow12/EEVE_ver_4.1_sft | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"conversational",
"en",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:44:06+00:00 | [] | [
"en",
"ko"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #conversational #en #ko #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ver\_4.1\_sft
=============
This model is a fine-tuned version of yanolja/EEVE-Korean-10.8B-v1.0 on the Custom dataset.
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 8
* total\_train\_batch\_size: 8
* total\_eval\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* training\_steps: 0
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.0.1
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 8\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 0",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.0.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #conversational #en #ko #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 8\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 0",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.0.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | null | GGUFs for Jersey Devil 14b - https://huggingface.co/MarsupialAI/JerseyDevil-14b
iMatrix GGUFs generated with Kalomaze's semi-random groups_merged.txt | {} | MarsupialAI/JerseyDevil-14b_iMatrix_GGUF | null | [
"gguf",
"region:us"
] | null | 2024-04-18T01:48:42+00:00 | [] | [] | TAGS
#gguf #region-us
| GGUFs for Jersey Devil 14b - URL
iMatrix GGUFs generated with Kalomaze's semi-random groups_merged.txt | [] | [
"TAGS\n#gguf #region-us \n"
] |
sentence-similarity | sentence-transformers |
# mteb-pt/average_pt_nilc_wang2vec_skip_s1000
This is an adaptation of pre-trained Portuguese Wang2Vec Word Embeddings to a [sentence-transformers](https://www.SBERT.net) model.
The original pre-trained word embeddings can be found at: [http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc](http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc).
This model maps sentences & paragraphs to a 1000 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mteb-pt/average_pt_nilc_wang2vec_skip_s1000')
embeddings = model.encode(sentences)
print(embeddings)
```
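As a small follow-on example, the resulting embeddings can be compared directly with a cosine similarity using the `util` helpers that ship with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('mteb-pt/average_pt_nilc_wang2vec_skip_s1000')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Cosine similarity between the two 1000-dimensional sentence vectors
print(util.cos_sim(embeddings[0], embeddings[1]))
```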
## Evaluation Results
For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: [mteb-pt/leaderboard](https://huggingface.co/spaces/mteb-pt/leaderboard)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(929607, 1000)
)
(1): Pooling({'word_embedding_dimension': 1000, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
```bibtex
@inproceedings{hartmann2017portuguese,
title = {Portuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks},
author = {Hartmann, Nathan S and
Fonseca, Erick R and
Shulby, Christopher D and
Treviso, Marcos V and
Rodrigues, J{\'{e}}ssica S and
Alu{\'{\i}}sio, Sandra Maria},
year = {2017},
publisher = {SBC},
booktitle = {Brazilian Symposium in Information and Human Language Technology - STIL},
url = {https://sol.sbc.org.br/index.php/stil/article/view/4008}
}
``` | {"language": ["pt"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | pt-mteb/average_pt_nilc_wang2vec_skip_s1000 | null | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"pt",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:50:22+00:00 | [] | [
"pt"
] | TAGS
#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us
|
# mteb-pt/average_pt_nilc_wang2vec_skip_s1000
This is an adaptation of pre-trained Portuguese Wang2Vec Word Embeddings to a sentence-transformers model.
The original pre-trained word embeddings can be found at: URL
This model maps sentences & paragraphs to a 1000 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard
## Full Model Architecture
## Citing & Authors
| [
"# mteb-pt/average_pt_nilc_wang2vec_skip_s1000\n\nThis is an adaptation of pre-trained Portuguese Wang2Vec Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 1000 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us \n",
"# mteb-pt/average_pt_nilc_wang2vec_skip_s1000\n\nThis is an adaptation of pre-trained Portuguese Wang2Vec Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 1000 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard",
"## Full Model Architecture",
"## Citing & Authors"
] |
null | null |
# DavidAU/LDCC-SOLAR-10.7B-Q8_0-GGUF
This model was converted to GGUF format from [`LDCC/LDCC-SOLAR-10.7B`](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/LDCC-SOLAR-10.7B-Q8_0-GGUF --model ldcc-solar-10.7b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/LDCC-SOLAR-10.7B-Q8_0-GGUF --model ldcc-solar-10.7b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m ldcc-solar-10.7b.Q8_0.gguf -n 128
```
| {"language": ["ko"], "license": "cc-by-nc-4.0", "tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/LDCC-SOLAR-10.7B-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"ko",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-18T01:51:13+00:00 | [] | [
"ko"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #ko #license-cc-by-nc-4.0 #region-us
|
# DavidAU/LDCC-SOLAR-10.7B-Q8_0-GGUF
This model was converted to GGUF format from 'LDCC/LDCC-SOLAR-10.7B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/LDCC-SOLAR-10.7B-Q8_0-GGUF\nThis model was converted to GGUF format from 'LDCC/LDCC-SOLAR-10.7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #ko #license-cc-by-nc-4.0 #region-us \n",
"# DavidAU/LDCC-SOLAR-10.7B-Q8_0-GGUF\nThis model was converted to GGUF format from 'LDCC/LDCC-SOLAR-10.7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers | # Mistral-RealworldQA-v0.2-7b SFT GGUF
<img src="https://i.imgur.com/Pf53ms5.jpeg" width="400"/>
An experiment with the goal of reducing hallucinations in [VQA](https://huggingface.co/tasks/visual-question-answering)
First in a series of experiments centering around fine-tuning for image captioning.
<h1>Release Notes</h1>
* v0.1 - Initial Release
* <b>v0.2</b> (Current) - Updated the base model to the official Mistral-7b fp16 release; refined the dataset and instruction formatting
<h2>Background & Methodology</h2>
Mistral-7b-02 base model was fine-tuned using the [RealWorldQA dataset](https://huggingface.co/datasets/visheratin/realworldqa), originally provided by the X.Ai Team here: https://x.ai/blog/grok-1.5v
<h1>Vision Results</h1>
Example 1
<img src="https://i.imgur.com/E9mS4Xb.jpeg" width="400"/>
Example 2
<img src="https://i.imgur.com/SmTz1Yd.jpeg" width="400"/>
* The experiment yielded a model that provides shorter, less verbose output for questions about pictures
* The likelihood of hallucinations in the output has decreased; however, the model can still be easily led into inaccurate answers by the user
* Best suited for captioning use cases that require concise descriptions and low token counts
* This model lacks the conversational prose of Excalibur-7b-DPO and is much "drier" in tone
<b>Requires additional mmproj file. You have two options for vision functionality (available inside this repo):</b>
1. [Quantized - Limited VRAM Option (197mb)](https://huggingface.co/InferenceIllusionist/Mistral-RealworldQA-v0.2-7b-SFT/resolve/main/mistral-7b-mmproj-v1.5-Q4_1.gguf?download=true)
2. [Unquantized - Premium Option / Best Quality (596mb)](https://huggingface.co/InferenceIllusionist/Mistral-RealworldQA-v0.2-7b-SFT/resolve/main/mmproj-model-f16.gguf?download=true)
Select the gguf file of your choice in [Koboldcpp](https://github.com/LostRuins/koboldcpp/releases/) as usual, then make sure to choose the mmproj file above in the LLaVA mmproj field of the model submenu:
<img src="https://i.imgur.com/x8vqH29.png" width="425"/>
## Prompt Format
Use Alpaca for best results.
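For reference, the Alpaca convention referred to here is usually formatted as shown below. This is the common community template, not a verbatim excerpt from this card, so the exact wording and spacing used by the author may differ.

```python
# Common Alpaca-style prompt template (assumed, not taken from this card).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

print(ALPACA_TEMPLATE.format(instruction="Describe what is happening in the attached photo."))
```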
## Other info
- **Developed by:** InferenceIllusionist
- **License:** apache-2.0
- **Finetuned from model :** mistral-community/Mistral-7B-v0.2
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft", "gguf"], "datasets": ["visheratin/realworldqa"], "base_model": "unsloth/mistral-7b-v0.2-bnb-4bit"} | InferenceIllusionist/Mistral-RealworldQA-v0.2-7b-SFT-GGUF | null | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"dataset:visheratin/realworldqa",
"base_model:unsloth/mistral-7b-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:51:13+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #mistral #text-generation-inference #unsloth #trl #sft #en #dataset-visheratin/realworldqa #base_model-unsloth/mistral-7b-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
| # Mistral-RealworldQA-v0.2-7b SFT GGUF
<img src="https://i.URL width="400"/>
An experiment with the goal of reducing hallucinations in VQA
First in a series of experiments centering around fine-tuning for image captioning.
<h1>Release Notes</h1>
* v0.1 - Initial Release
* <b>v0.2</b> (Current) - Updated the base model to the official Mistral-7b fp16 release; refined the dataset and instruction formatting
<h2>Background & Methodology</h2>
Mistral-7b-02 base model was fine-tuned using the RealWorldQA dataset, originally provided by the X.Ai Team here: https://x.ai/blog/grok-1.5v
<h1>Vision Results</h1>
Example 1
<img src="https://i.URL width="400"/>
Example 2
<img src="https://i.URL width="400"/>
* The experiment yielded a model that provides shorter, less verbose output for questions about pictures
* The likelihood of hallucinations in the output has decreased; however, the model can still be easily led into inaccurate answers by the user
* Best suited for captioning use cases that require concise descriptions and low token counts
* This model lacks the conversational prose of Excalibur-7b-DPO and is much "drier" in tone
<b>Requires additional mmproj file. You have two options for vision functionality (available inside this repo):</b>
1. Quantized - Limited VRAM Option (197mb)
2. Unquantized - Premium Option / Best Quality (596mb)
Select the gguf file of your choice in Koboldcpp as usual, then make sure to choose the mmproj file above in the LLaVA mmproj field of the model submenu:
<img src="https://i.URL width="425"/>
## Prompt Format
Use Alpaca for best results.
## Other info
- Developed by: InferenceIllusionist
- License: apache-2.0
- Finetuned from model : mistral-community/Mistral-7B-v0.2
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/> | [
"# Mistral-RealworldQA-v0.2-7b SFT GGUF\n\n<img src=\"https://i.URL width=\"400\"/>\n\n\nAn experiment with the goal of reducing hallucinations in VQA\n\nFirst in a series of experiments centering around fine-tuning for image captioning.\n\n<h1>Release Notes</h1>\n\n* v0.1 - Initial Release\n* <b>v0.2</b> (Current)- Updating base model to official Mistral-7b fp16 release, refinements to dataset and instruction formating\n\n<h2>Background & Methodology</h2>\n\nMistral-7b-02 base model was fine-tuned using the RealWorldQA dataset, originally provided by the X.Ai Team here: https://x.ai/blog/grok-1.5v\n\n<h1>Vision Results</h1>\n\nExample 1\n<img src=\"https://i.URL width=\"400\"/>\nExample 2\n<img src=\"https://i.URL width=\"400\"/>\n\n* Experiment yielded model that provides shorter, less verbose output for questions about pictures\n* The likelihood of hallucinations in output has decreased, however, the model can still be easily influenced to be inaccurate by the user\n* Best suited for captioning use cases that require concise descriptions and low token counts\n* This model lacks the conversational prose of Excalibur-7b-DPO and is much \"drier\" in tone\n\n<b>Requires additional mmproj file. You have two options for vision functionality (available inside this repo):</b>\n 1. Quantized - Limited VRAM Option (197mb)\n 2. Unquantized - Premium Option / Best Quality (596mb)\n\nSelect the gguf file of your choice in Koboldcpp as usual, then make sure to choose the mmproj file above in the LLaVA mmproj field of the model submenu:\n<img src=\"https://i.URL width=\"425\"/>",
"## Prompt Format\nUse Alpaca for best results.",
"## Other info\n- Developed by: InferenceIllusionist\n- License: apache-2.0\n- Finetuned from model : mistral-community/Mistral-7B-v0.2\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #trl #sft #en #dataset-visheratin/realworldqa #base_model-unsloth/mistral-7b-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Mistral-RealworldQA-v0.2-7b SFT GGUF\n\n<img src=\"https://i.URL width=\"400\"/>\n\n\nAn experiment with the goal of reducing hallucinations in VQA\n\nFirst in a series of experiments centering around fine-tuning for image captioning.\n\n<h1>Release Notes</h1>\n\n* v0.1 - Initial Release\n* <b>v0.2</b> (Current)- Updating base model to official Mistral-7b fp16 release, refinements to dataset and instruction formating\n\n<h2>Background & Methodology</h2>\n\nMistral-7b-02 base model was fine-tuned using the RealWorldQA dataset, originally provided by the X.Ai Team here: https://x.ai/blog/grok-1.5v\n\n<h1>Vision Results</h1>\n\nExample 1\n<img src=\"https://i.URL width=\"400\"/>\nExample 2\n<img src=\"https://i.URL width=\"400\"/>\n\n* Experiment yielded model that provides shorter, less verbose output for questions about pictures\n* The likelihood of hallucinations in output has decreased, however, the model can still be easily influenced to be inaccurate by the user\n* Best suited for captioning use cases that require concise descriptions and low token counts\n* This model lacks the conversational prose of Excalibur-7b-DPO and is much \"drier\" in tone\n\n<b>Requires additional mmproj file. You have two options for vision functionality (available inside this repo):</b>\n 1. Quantized - Limited VRAM Option (197mb)\n 2. Unquantized - Premium Option / Best Quality (596mb)\n\nSelect the gguf file of your choice in Koboldcpp as usual, then make sure to choose the mmproj file above in the LLaVA mmproj field of the model submenu:\n<img src=\"https://i.URL width=\"425\"/>",
"## Prompt Format\nUse Alpaca for best results.",
"## Other info\n- Developed by: InferenceIllusionist\n- License: apache-2.0\n- Finetuned from model : mistral-community/Mistral-7B-v0.2\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | null |
# DavidAU/Sensualize-Solar-10.7B-Q8_0-GGUF
This model was converted to GGUF format from [`Sao10K/Sensualize-Solar-10.7B`](https://huggingface.co/Sao10K/Sensualize-Solar-10.7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/Sensualize-Solar-10.7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Sensualize-Solar-10.7B-Q8_0-GGUF --model sensualize-solar-10.7b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Sensualize-Solar-10.7B-Q8_0-GGUF --model sensualize-solar-10.7b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m sensualize-solar-10.7b.Q8_0.gguf -n 128
```
| {"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["llama-cpp", "gguf-my-repo"], "base_model": ["upstage/SOLAR-10.7B-v1.0"]} | DavidAU/Sensualize-Solar-10.7B-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:upstage/SOLAR-10.7B-v1.0",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-18T01:52:49+00:00 | [] | [
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #en #base_model-upstage/SOLAR-10.7B-v1.0 #license-cc-by-nc-4.0 #region-us
|
# DavidAU/Sensualize-Solar-10.7B-Q8_0-GGUF
This model was converted to GGUF format from 'Sao10K/Sensualize-Solar-10.7B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Sensualize-Solar-10.7B-Q8_0-GGUF\nThis model was converted to GGUF format from 'Sao10K/Sensualize-Solar-10.7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #en #base_model-upstage/SOLAR-10.7B-v1.0 #license-cc-by-nc-4.0 #region-us \n",
"# DavidAU/Sensualize-Solar-10.7B-Q8_0-GGUF\nThis model was converted to GGUF format from 'Sao10K/Sensualize-Solar-10.7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers | This model is a fine-tuned version of the Llama 7B LLM.
This model, known as Llama 7B LLM, is a remarkable achievement in the field of natural language processing. Developed as the successor to Llama 6B LLM, this fine-tuned version exhibits enhanced capabilities and improved performance. Let's delve into the advancements and features of Llama 7B LLM.
One of the key areas of improvement in this model lies in its ability to understand and generate nuanced language. Through extensive training on a diverse range of textual data, Llama 7B LLM has acquired a deeper understanding of context, semantics, and syntax. It can now generate more coherent and contextually relevant responses, making it an even more valuable tool for various applications.
 | {"license": "apache-2.0"} | naivecat/cherry_5_7B | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:52:51+00:00 | [] | [] | TAGS
#transformers #pytorch #llama #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| This model is a fine-tuned version of the Llama 7B LLM.
This model, known as Llama 7B LLM, is a remarkable achievement in the field of natural language processing. Developed as the successor to Llama 6B LLM, this fine-tuned version exhibits enhanced capabilities and improved performance. Let's delve into the advancements and features of Llama 7B LLM.
One of the key areas of improvement in this model lies in its ability to understand and generate nuanced language. Through extensive training on a diverse range of textual data, Llama 7B LLM has acquired a deeper understanding of context, semantics, and syntax. It can now generate more coherent and contextually relevant responses, making it an even more valuable tool for various applications.
 | [] | [
"TAGS\n#transformers #pytorch #llama #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | null | TRIGGERED WORD : CumOnFAceQuiron Style | {} | keanurefresh/239803 | null | [
"region:us"
] | null | 2024-04-18T01:53:10+00:00 | [] | [] | TAGS
#region-us
| TRIGGERED WORD : CumOnFAceQuiron Style | [] | [
"TAGS\n#region-us \n"
] |
text-generation | null |
## Model Summary
Phi-mmlu-lora is a LoRA model fine-tuned on the gsm8k dataset. The base model is [microsoft/phi-2](https://huggingface.co/microsoft/phi-2).
## How to Use
```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM
torch.set_default_device("cuda")
model = AutoPeftModelForCausalLM.from_pretrained("liuchanghf/phi2-mmlu-lora")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
inputs = tokenizer('''def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
``` | {"language": ["en"], "license": "mit", "tags": ["nlp", "code"], "license_link": "https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE", "pipeline_tag": "text-generation"} | liuchanghf/phi2-gsm8k-lora | null | [
"safetensors",
"nlp",
"code",
"text-generation",
"en",
"license:mit",
"region:us"
] | null | 2024-04-18T01:53:25+00:00 | [] | [
"en"
] | TAGS
#safetensors #nlp #code #text-generation #en #license-mit #region-us
|
## Model Summary
Phi-mmlu-lora is a LoRA model fine-tuned on the gsm8k dataset. The base model is microsoft/phi-2.
## How to Use
| [
"## Model Summary\n\nPhi-mmlu-lora is a LORA model which fine-tuned on gsm8k dataset. The base model is microsoft/phi-2.",
"## How to Use"
] | [
"TAGS\n#safetensors #nlp #code #text-generation #en #license-mit #region-us \n",
"## Model Summary\n\nPhi-mmlu-lora is a LORA model which fine-tuned on gsm8k dataset. The base model is microsoft/phi-2.",
"## How to Use"
] |
null | transformers |
# DavidAU/nox-solar-10.7b-v4-Q8_0-GGUF
This model was converted to GGUF format from [`davidkim205/nox-solar-10.7b-v4`](https://huggingface.co/davidkim205/nox-solar-10.7b-v4) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/davidkim205/nox-solar-10.7b-v4) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/nox-solar-10.7b-v4-Q8_0-GGUF --model nox-solar-10.7b-v4.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/nox-solar-10.7b-v4-Q8_0-GGUF --model nox-solar-10.7b-v4.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m nox-solar-10.7b-v4.Q8_0.gguf -n 128
```
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/nox-solar-10.7b-v4-Q8_0-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:54:03+00:00 | [] | [] | TAGS
#transformers #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #endpoints_compatible #region-us
|
# DavidAU/nox-solar-10.7b-v4-Q8_0-GGUF
This model was converted to GGUF format from 'davidkim205/nox-solar-10.7b-v4' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/nox-solar-10.7b-v4-Q8_0-GGUF\nThis model was converted to GGUF format from 'davidkim205/nox-solar-10.7b-v4' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #endpoints_compatible #region-us \n",
"# DavidAU/nox-solar-10.7b-v4-Q8_0-GGUF\nThis model was converted to GGUF format from 'davidkim205/nox-solar-10.7b-v4' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Priyanshu0007/AquaLlama-2-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AquaLlama-2-7b-GGUF/resolve/main/AquaLlama-2-7b.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/AquaLlama-2-7b-GGUF/resolve/main/AquaLlama-2-7b.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/AquaLlama-2-7b-GGUF/resolve/main/AquaLlama-2-7b.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AquaLlama-2-7b-GGUF/resolve/main/AquaLlama-2-7b.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/AquaLlama-2-7b-GGUF/resolve/main/AquaLlama-2-7b.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/AquaLlama-2-7b-GGUF/resolve/main/AquaLlama-2-7b.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AquaLlama-2-7b-GGUF/resolve/main/AquaLlama-2-7b.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/AquaLlama-2-7b-GGUF/resolve/main/AquaLlama-2-7b.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/AquaLlama-2-7b-GGUF/resolve/main/AquaLlama-2-7b.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AquaLlama-2-7b-GGUF/resolve/main/AquaLlama-2-7b.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AquaLlama-2-7b-GGUF/resolve/main/AquaLlama-2-7b.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/AquaLlama-2-7b-GGUF/resolve/main/AquaLlama-2-7b.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/AquaLlama-2-7b-GGUF/resolve/main/AquaLlama-2-7b.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AquaLlama-2-7b-GGUF/resolve/main/AquaLlama-2-7b.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
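If you prefer to run one of the files above from Python rather than the llama.cpp CLI, a hedged sketch with `llama-cpp-python` looks like this; the file name is taken from the Q4_K_M row of the table, and the context size and prompt are arbitrary.

```python
# Hedged sketch: download and run a quant from the table above with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download("mradermacher/AquaLlama-2-7b-GGUF", "AquaLlama-2-7b.Q4_K_M.gguf")
llm = Llama(model_path=path, n_ctx=2048)

result = llm("The meaning of life is", max_tokens=64)
print(result["choices"][0]["text"])
```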
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "base_model": "Priyanshu0007/AquaLlama-2-7b", "quantized_by": "mradermacher"} | mradermacher/AquaLlama-2-7b-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:Priyanshu0007/AquaLlama-2-7b",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:54:11+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-Priyanshu0007/AquaLlama-2-7b #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-Priyanshu0007/AquaLlama-2-7b #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
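As this section is left empty by the author, the following is only a hedged sketch: a `text-generation` pipeline over the 4-bit GPTQ checkpoint, assuming `optimum` and `auto-gptq` are installed so the quantized weights can be loaded.

```python
# Hedged sketch, not from the model author.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="kreas/Mistral-7B-v0.1-GPTQ-4bit",  # repo id for this card
    device_map="auto",
)
print(generator("Once upon a time", max_new_tokens=64)[0]["generated_text"])
```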
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | kreas/Mistral-7B-v0.1-GPTQ-4bit | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-18T01:54:39+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2_medical_bios_5000_1ep
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
| {"tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "Mistral-7B-Instruct-v0.2_medical_bios_5000_1ep", "results": []}]} | mohsenfayyaz/Mistral-7B-Instruct-v0.2_medical_bios_5000_1ep | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:55:02+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #trl #sft #generated_from_trainer #conversational #base_model-mistralai/Mistral-7B-Instruct-v0.2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mistral-7B-Instruct-v0.2_medical_bios_5000_1ep
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
| [
"# Mistral-7B-Instruct-v0.2_medical_bios_5000_1ep\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.5e-06\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #trl #sft #generated_from_trainer #conversational #base_model-mistralai/Mistral-7B-Instruct-v0.2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mistral-7B-Instruct-v0.2_medical_bios_5000_1ep\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.5e-06\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.15.2"
] |
text-generation | transformers | # nbeerbower/flammen17-py-DPO-v1-7B AWQ
- Model creator: [nbeerbower](https://huggingface.co/nbeerbower)
- Original model: [flammen17-py-DPO-v1-7B](https://huggingface.co/nbeerbower/flammen17-py-DPO-v1-7B)

## Model Summary
A Mistral 7B LLM built from merging pretrained models and finetuning on [Jon Durbin](https://huggingface.co/jondurbin)'s [py-dpo-v0.1](https://huggingface.co/datasets/jondurbin/py-dpo-v0.1).
Finetuned using an A100 on Google Colab. 🙏
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "experimental"], "datasets": ["jondurbin/py-dpo-v0.1"], "base_model": ["nbeerbower/flammen17-mistral-7B"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/Flammen-Trismegistus-7B-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"experimental",
"dataset:jondurbin/py-dpo-v0.1",
"base_model:nbeerbower/flammen17-mistral-7B",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:56:07+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #experimental #dataset-jondurbin/py-dpo-v0.1 #base_model-nbeerbower/flammen17-mistral-7B #license-apache-2.0 #text-generation-inference #region-us
| # nbeerbower/flammen17-py-DPO-v1-7B AWQ
- Model creator: nbeerbower
- Original model: flammen17-py-DPO-v1-7B
!image/png
## Model Summary
A Mistral 7B LLM built from merging pretrained models and finetuning on Jon Durbin's py-dpo-v0.1.
Finetuned using an A100 on Google Colab.
Fine-tune a Mistral-7b model with Direct Preference Optimization - Maxime Labonne
| [
"# nbeerbower/flammen17-py-DPO-v1-7B AWQ\n\n- Model creator: nbeerbower\n- Original model: flammen17-py-DPO-v1-7B\n\n!image/png",
"## Model Summary\n\nA Mistral 7B LLM built from merging pretrained models and finetuning on Jon Durbin's py-dpo-v0.1.\n\nFinetuned using an A100 on Google Colab. \n\nFine-tune a Mistral-7b model with Direct Preference Optimization - Maxime Labonne"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #experimental #dataset-jondurbin/py-dpo-v0.1 #base_model-nbeerbower/flammen17-mistral-7B #license-apache-2.0 #text-generation-inference #region-us \n",
"# nbeerbower/flammen17-py-DPO-v1-7B AWQ\n\n- Model creator: nbeerbower\n- Original model: flammen17-py-DPO-v1-7B\n\n!image/png",
"## Model Summary\n\nA Mistral 7B LLM built from merging pretrained models and finetuning on Jon Durbin's py-dpo-v0.1.\n\nFinetuned using an A100 on Google Colab. \n\nFine-tune a Mistral-7b model with Direct Preference Optimization - Maxime Labonne"
] |
text-generation | transformers | # R136a1/InfinityKuno-2x7B AWQ
- Model creator: [R136a1](https://huggingface.co/R136a1)
- Original model: [InfinityKuno-2x7B](https://huggingface.co/R136a1/InfinityKuno-2x7B)

## Model Summary
Experimental model built from [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B) and [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B), merged into an MoE model with 2x7B parameters.
### Prompt format:
Alpaca, Extended Alpaca, Roleplay-Alpaca. (Use any Alpaca based prompt formatting and you should be fine.)
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "safetensors", "mixtral", "not-for-all-audiences", "nsfw"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious", "model-index": [{"name": "InfinityKuno-2x7B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 69.62, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKuno-2x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 87.44, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKuno-2x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 64.49, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKuno-2x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 63.28}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKuno-2x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 82.72, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKuno-2x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 66.34, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKuno-2x7B", "name": "Open LLM Leaderboard"}}]}]} | solidrust/InfinityKuno-2x7B-AWQ | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"not-for-all-audiences",
"nsfw",
"en",
"license:apache-2.0",
"model-index",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:56:31+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mixtral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #not-for-all-audiences #nsfw #en #license-apache-2.0 #model-index #text-generation-inference #region-us
| # R136a1/InfinityKuno-2x7B AWQ
- Model creator: R136a1
- Original model: InfinityKuno-2x7B
!InfinityKuno-2x7B
## Model Summary
Experimental model built from Endevor/InfinityRP-v1-7B and SanjiWatsuki/Kunoichi-DPO-v2-7B, merged into an MoE model with 2x7B parameters.
### Prompt format:
Alpaca, Extended Alpaca, Roleplay-Alpaca. (Use any Alpaca based prompt formatting and you should be fine.)
| [
"# R136a1/InfinityKuno-2x7B AWQ\n\n- Model creator: R136a1\n- Original model: InfinityKuno-2x7B\n\n!InfinityKuno-2x7B",
"## Model Sumamry\n\nExperimental model from Endevor/InfinityRP-v1-7B and SanjiWatsuki/Kunoichi-DPO-v2-7B models. Merged to MoE model with 2x7B parameters.",
"### Prompt format: \nAlpaca, Extended Alpaca, Roleplay-Alpaca. (Use any Alpaca based prompt formatting and you should be fine.)"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #not-for-all-audiences #nsfw #en #license-apache-2.0 #model-index #text-generation-inference #region-us \n",
"# R136a1/InfinityKuno-2x7B AWQ\n\n- Model creator: R136a1\n- Original model: InfinityKuno-2x7B\n\n!InfinityKuno-2x7B",
"## Model Sumamry\n\nExperimental model from Endevor/InfinityRP-v1-7B and SanjiWatsuki/Kunoichi-DPO-v2-7B models. Merged to MoE model with 2x7B parameters.",
"### Prompt format: \nAlpaca, Extended Alpaca, Roleplay-Alpaca. (Use any Alpaca based prompt formatting and you should be fine.)"
] |
sentence-similarity | sentence-transformers |
# mteb-pt/average_pt_nilc_word2vec_skip_s1000
This is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a [sentence-transformers](https://www.SBERT.net) model.
The original pre-trained word embeddings can be found at: [http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc](http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc).
This model maps sentences & paragraphs to a 1000 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mteb-pt/average_pt_nilc_word2vec_skip_s1000')
embeddings = model.encode(sentences)
print(embeddings)
```
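Because the card mentions clustering and semantic search, here is a minimal, hedged semantic-search sketch built on the same embeddings. The Portuguese corpus and query below are invented examples, not part of the original model card:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('mteb-pt/average_pt_nilc_word2vec_skip_s1000')

# Invented Portuguese corpus and query, used only to illustrate semantic search
corpus = ["O gato dorme no sofá", "A economia cresceu neste trimestre"]
query = "Um animal descansando em casa"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode(query)

# Cosine similarity between the query and each corpus sentence (higher = more similar)
scores = util.cos_sim(query_embedding, corpus_embeddings)
print(scores)
```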
## Evaluation Results
For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: [mteb-pt/leaderboard](https://huggingface.co/spaces/mteb-pt/leaderboard)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(929607, 1000)
)
(1): Pooling({'word_embedding_dimension': 1000, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
```bibtex
@inproceedings{hartmann2017portuguese,
title = {Portuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks},
author = {Hartmann, Nathan S and
Fonseca, Erick R and
Shulby, Christopher D and
Treviso, Marcos V and
            Rodrigues, J{\'{e}}ssica S and
            Alu{\'{\i}}sio, Sandra Maria},
year = {2017},
publisher = {SBC},
booktitle = {Brazilian Symposium in Information and Human Language Technology - STIL},
url = {https://sol.sbc.org.br/index.php/stil/article/view/4008}
}
``` | {"language": ["pt"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | pt-mteb/average_pt_nilc_word2vec_skip_s1000 | null | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"pt",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:56:53+00:00 | [] | [
"pt"
] | TAGS
#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us
|
# mteb-pt/average_pt_nilc_word2vec_skip_s1000
This is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a sentence-transformers model.
The original pre-trained word embeddings can be found at: URL
This model maps sentences & paragraphs to a 1000 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard
## Full Model Architecture
## Citing & Authors
| [
"# mteb-pt/average_pt_nilc_word2vec_skip_s1000\n\nThis is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 1000 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us \n",
"# mteb-pt/average_pt_nilc_word2vec_skip_s1000\n\nThis is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 1000 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard",
"## Full Model Architecture",
"## Citing & Authors"
] |
text-generation | transformers | # mistralai/Mistral-7B-Instruct-v0.2 AWQ
- Model creator: [mistralai](https://huggingface.co/mistralai)
- Original model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
## Model Summary
The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.
Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1
- 32k context window (vs 8k context in v0.1)
- Rope-theta = 1e6
- No Sliding-Window Attention
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/la-plateforme/).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method.
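As a brief, hedged illustration (not part of the upstream card), the template can be applied with the base instruct model's tokenizer; the conversation below simply reuses the example above:
```python
from transformers import AutoTokenizer

# Assumption: the base instruct tokenizer (and its chat template) matches this AWQ repack
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice."},
    {"role": "user", "content": "Do you have mayonnaise recipes?"},
]

# Renders the conversation into the [INST] ... [/INST] format shown above
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```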
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "finetuned"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/Mistral-7B-Instruct-v0.2-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"finetuned",
"conversational",
"arxiv:2310.06825",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T01:56:55+00:00 | [
"2310.06825"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #finetuned #conversational #arxiv-2310.06825 #license-apache-2.0 #text-generation-inference #region-us
| # mistralai/Mistral-7B-Instruct-v0.2 AWQ
- Model creator: mistralai
- Original model: Mistral-7B-Instruct-v0.2
## Model Summary
The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.
Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1
- 32k context window (vs 8k context in v0.1)
- Rope-theta = 1e6
- No Sliding-Window Attention
For full details of this model please read our paper and release blog post.
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by '[INST]' and '[/INST]' tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
This format is available as a chat template via the 'apply_chat_template()' method.
| [
"# mistralai/Mistral-7B-Instruct-v0.2 AWQ\n\n- Model creator: mistralai\n- Original model: Mistral-7B-Instruct-v0.2",
"## Model Summary\n\nThe Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.\n\nMistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1\n- 32k context window (vs 8k context in v0.1)\n- Rope-theta = 1e6\n- No Sliding-Window Attention\n\nFor full details of this model please read our paper and release blog post.",
"## Instruction format\n\nIn order to leverage instruction fine-tuning, your prompt should be surrounded by '[INST]' and '[/INST]' tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.\n\nE.g.\n\n\nThis format is available as a chat template via the 'apply_chat_template()' method."
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #finetuned #conversational #arxiv-2310.06825 #license-apache-2.0 #text-generation-inference #region-us \n",
"# mistralai/Mistral-7B-Instruct-v0.2 AWQ\n\n- Model creator: mistralai\n- Original model: Mistral-7B-Instruct-v0.2",
"## Model Summary\n\nThe Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.\n\nMistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1\n- 32k context window (vs 8k context in v0.1)\n- Rope-theta = 1e6\n- No Sliding-Window Attention\n\nFor full details of this model please read our paper and release blog post.",
"## Instruction format\n\nIn order to leverage instruction fine-tuning, your prompt should be surrounded by '[INST]' and '[/INST]' tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.\n\nE.g.\n\n\nThis format is available as a chat template via the 'apply_chat_template()' method."
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small GA-EN Speech Translation
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia datasets, together with a copy of the data processed with noise reduction and normalization (applied to both the train and test splits).
It achieves the following results on the evaluation set:
- Loss: 1.3339
- Bleu: 30.66
- Chrf: 46.99
- Wer: 65.4660
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.01
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Bleu | Chrf | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:-----:|:-----:|:---------------:|:--------:|
| 1.41 | 0.07 | 100 | 9.78 | 25.23 | 1.8782 | 96.3980 |
| 1.2436 | 0.13 | 200 | 10.23 | 28.66 | 1.8301 | 125.9343 |
| 1.593 | 0.2 | 300 | 9.53 | 30.7 | 1.7066 | 137.1454 |
| 1.9589 | 0.26 | 400 | 12.08 | 32.94 | 1.5629 | 109.3652 |
| 1.8174 | 0.33 | 500 | 13.73 | 34.5 | 1.5154 | 123.5930 |
| 1.6775 | 0.39 | 600 | 15.8 | 35.68 | 1.5220 | 102.2062 |
| 1.7074 | 0.46 | 700 | 16.62 | 37.96 | 1.4570 | 100.5853 |
| 1.5793 | 0.53 | 800 | 24.5 | 39.91 | 1.4265 | 71.3643 |
| 1.3708 | 0.59 | 900 | 24.35 | 42.26 | 1.3845 | 73.7956 |
| 1.3217 | 0.66 | 1000 | 19.34 | 41.3 | 1.3662 | 87.7533 |
| 1.2572 | 0.72 | 1100 | 21.59 | 41.35 | 1.3529 | 88.4286 |
| 1.1447 | 0.79 | 1200 | 28.39 | 44.99 | 1.3228 | 65.9163 |
| 1.1544 | 0.85 | 1300 | 23.69 | 43.07 | 1.2972 | 80.1891 |
| 1.0291 | 0.92 | 1400 | 29.36 | 45.45 | 1.2828 | 70.9590 |
| 0.9394 | 0.98 | 1500 | 26.44 | 44.0 | 1.2812 | 74.1558 |
| 0.3764 | 1.05 | 1600 | 26.95 | 44.82 | 1.3248 | 73.8406 |
| 0.3338 | 1.12 | 1700 | 26.5 | 44.96 | 1.3212 | 77.3976 |
| 0.3148 | 1.18 | 1800 | 29.57 | 46.31 | 1.3188 | 66.7267 |
| 0.3206 | 1.25 | 1900 | 30.87 | 47.21 | 1.3050 | 64.4755 |
| 0.3069 | 1.31 | 2000 | 30.15 | 46.19 | 1.3053 | 65.6911 |
| 0.3342 | 1.38 | 2100 | 24.14 | 44.12 | 1.3506 | 77.2625 |
| 0.3125 | 1.44 | 2200 | 30.21 | 46.08 | 1.3369 | 63.9802 |
| 0.319 | 1.51 | 2300 | 27.71 | 45.45 | 1.3601 | 69.9235 |
| 0.3067 | 1.58 | 2400 | 26.92 | 45.73 | 1.3473 | 69.3381 |
| 0.2621 | 1.64 | 2500 | 28.36 | 46.14 | 1.3354 | 66.9068 |
| 0.2709 | 1.71 | 2600 | 28.75 | 45.47 | 1.3339 | 65.2859 |
| 0.2644 | 1.77 | 2700 | 28.84 | 47.35 | 1.3100 | 65.8262 |
| 0.2511 | 1.84 | 2800 | 29.41 | 47.31 | 1.3261 | 69.4732 |
| 0.2232 | 1.9 | 2900 | 30.79 | 46.63 | 1.3382 | 64.1153 |
| 0.236 | 1.97 | 3000 | 30.66 | 46.99 | 1.3339 | 65.4660 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"language": ["ga", "en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["ymoslem/IWSLT2023-GA-EN", "ymoslem/FLEURS-GA-EN", "ymoslem/BitesizeIrish-GA-EN", "ymoslem/SpokenWords-GA-EN-MTed", "ymoslem/Tatoeba-Speech-Irish", "ymoslem/Wikimedia-Speech-Irish"], "metrics": ["bleu", "wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small GA-EN Speech Translation", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia, normalized", "type": "ymoslem/IWSLT2023-GA-EN"}, "metrics": [{"type": "bleu", "value": 30.66, "name": "Bleu"}, {"type": "wer", "value": 65.46600630346691, "name": "Wer"}]}]}]} | ymoslem/whisper-small-ga2en-v4 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ga",
"en",
"dataset:ymoslem/IWSLT2023-GA-EN",
"dataset:ymoslem/FLEURS-GA-EN",
"dataset:ymoslem/BitesizeIrish-GA-EN",
"dataset:ymoslem/SpokenWords-GA-EN-MTed",
"dataset:ymoslem/Tatoeba-Speech-Irish",
"dataset:ymoslem/Wikimedia-Speech-Irish",
"base_model:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T01:57:33+00:00 | [] | [
"ga",
"en"
] | TAGS
#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #ga #en #dataset-ymoslem/IWSLT2023-GA-EN #dataset-ymoslem/FLEURS-GA-EN #dataset-ymoslem/BitesizeIrish-GA-EN #dataset-ymoslem/SpokenWords-GA-EN-MTed #dataset-ymoslem/Tatoeba-Speech-Irish #dataset-ymoslem/Wikimedia-Speech-Irish #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us
| Whisper Small GA-EN Speech Translation
======================================
This model is a fine-tuned version of openai/whisper-small on the IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia datasets, together with a copy of the data processed with noise reduction and normalization (applied to both the train and test splits).
It achieves the following results on the evaluation set:
* Loss: 1.3339
* Bleu: 30.66
* Chrf: 46.99
* Wer: 65.4660
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 0.01
* training\_steps: 3000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 0.01\n* training\\_steps: 3000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #ga #en #dataset-ymoslem/IWSLT2023-GA-EN #dataset-ymoslem/FLEURS-GA-EN #dataset-ymoslem/BitesizeIrish-GA-EN #dataset-ymoslem/SpokenWords-GA-EN-MTed #dataset-ymoslem/Tatoeba-Speech-Irish #dataset-ymoslem/Wikimedia-Speech-Irish #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 0.01\n* training\\_steps: 3000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.000001_ablation_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.000001_ablation_iter_1", "results": []}]} | ShenaoZ/0.000001_ablation_iter_1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T02:00:12+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.000001_ablation_iter_1
This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# 0.000001_ablation_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.000001_ablation_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
null | null | TRIGGERED WORDS : bukkake, cum, facial | {} | keanurefresh/134097 | null | [
"region:us"
] | null | 2024-04-18T02:01:33+00:00 | [] | [] | TAGS
#region-us
| TRIGGERED WORDS : bukkake, cum, facial | [] | [
"TAGS\n#region-us \n"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mamba_imbalanced
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2374
- Accuracy: 0.9367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0526 | 0.1 | 210 | 0.4061 | 0.8257 |
| 1.7356 | 0.2 | 420 | 0.5385 | 0.7952 |
| 0.0 | 0.3 | 630 | 0.3027 | 0.8819 |
| 0.0136 | 0.4 | 840 | 0.2634 | 0.9181 |
| 0.0246 | 0.5 | 1050 | 0.2984 | 0.9243 |
| 0.0012 | 0.6 | 1260 | 0.2351 | 0.9295 |
| 0.0033 | 0.7 | 1470 | 0.2207 | 0.9348 |
| 0.0019 | 0.8 | 1680 | 0.2407 | 0.9381 |
| 0.0002 | 0.9 | 1890 | 0.2384 | 0.9362 |
| 0.0846 | 1.0 | 2100 | 0.2374 | 0.9367 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "mamba_imbalanced", "results": []}]} | erostrate9/mamba_imbalanced | null | [
"transformers",
"pytorch",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T02:02:00+00:00 | [] | [] | TAGS
#transformers #pytorch #generated_from_trainer #endpoints_compatible #region-us
| mamba\_imbalanced
=================
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2374
* Accuracy: 0.9367
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.01
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #pytorch #generated_from_trainer #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-classification | bertopic |
# C2-Topic-Model-100
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("AlexanderHolmes0/C2-Topic-Model-100")
topic_model.get_topic_info()
```
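As a further hedged sketch (not from the original card), previously unseen documents can be assigned to the trained topics with `transform`; the example documents below are invented:
```python
from bertopic import BERTopic

topic_model = BERTopic.load("AlexanderHolmes0/C2-Topic-Model-100")

# Invented example documents, used only to illustrate topic assignment
new_docs = [
    "The credit card offers travel rewards and cash back on purchases.",
    "Ticket presale chaos frustrated fans ahead of the stadium tour.",
]

topics, probs = topic_model.transform(new_docs)
print(topics)  # one topic id per document; -1 marks outliers
```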
An example of the ChatGPT (GPT-3.5 Turbo) representations:

## Topic overview
* Number of topics: 100
* Number of training documents: 828299
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic | Representation | Count | ChatGPT |
|--------:|:---------------------------------------------------------------------------------------------------------------------------------|--------:|:-------------------------------------------------------------------------------------------------------------------------|
| -1 | ['dumps', 'td', 'social', 'online', 'pin', 'like', 'new', 'make', 'card', 'time'] | 34030 | ['Home design tips and mobile betting information'] |
| 0 | ['price', 'value', 'index', 'market', '31', 'total', '30', 'shares', 'years', 'assets'] | 102457 | ['NYSE end of day stock repo and market update for MDC'] |
| 1 | ['card', 'credit', 'cards', 'account', 'rewards', 'travel', 'cash', 'points', 'purchases', 'earn'] | 171217 | ['Credit card application and travel tips for smoother experiences'] |
| 2 | ['rating', 'trust', 'estate', 'quaer', 'shares', 'stock', 'real', 'realty', 'investment', '00'] | 30538 | ['Real Estate Investment Trust Shareholder Activity and Performance in the First and Fourth Quarters'] |
| 3 | ['game', 'washington', 'wizards', 'season', 'bowl', 'games', 'capitals', 'team', 'spos', 'play'] | 57205 | ['Washington Wizards season and Capitals games'] |
| 4 | ['app', 'easy', 'love', 'great', 'use', 'credit', 'payments', 'apps', 'good', 'make'] | 27829 | ['Easy-to-use app with great features and love for its simplicity'] |
| 5 | ['2022', 'workforce', 'employees', 'announced', 'layoffs', 'million', 'business', 'cash', 'company', 'income'] | 67826 | ['Layoffs and restructuring announcements in 2022 by large companies listed on NASDAQ, NYSE, and other stock exchanges'] |
| 6 | ['deals', 'save', 'black', 'deal', 'friday', 'walma', 'cyber', 'shopping', 'browser', 'best'] | 23384 | ['Early Black Friday Deals on Monitors, Soundbars, Apple Watch Series, and 60/58 Inch 4K TVs'] |
| 7 | ['app', 'account', 'update', 'log', 'phone', 'login', 'password', 'use', 'spark', 'quicken'] | 25012 | ['Issues with App Account Transactions and Security'] |
| 8 | ['easy', 'use', 'convenient', 'navigate', 'friendly', 'fast', 'simple', 'works', 'quick', 'great'] | 20637 | ['easy to use and very convenient'] |
| 9 | ['ticketmaster', 'swift', 'presale', 'taylor', 'ticket', 'fans', 'tour', 'eras', 'verified', 'sale'] | 14825 | ["Ticketmaster's Chaos with Taylor Swift Ticket Sales"] |
| 10 | ['work', 'cons', 'pros', 'people', 'interview', 'great', 'good', 'like', 'tech', 'remote'] | 16734 | ['Software engineering degree apprenticeship and business planning'] |
| 11 | ['rating', 'stock', '00', 'shares', 'evgo', 'transocean', 'research', 'petroleum', 'quaer', 'company'] | 9483 | ['Stock Rating of Occidental Petroleum and ConocoPhillips'] |
| 12 | ['patent', 'virginia', 'alexandria', 'inventors', 'assigned', 'initially', 'developed', 'filed', 'application', 'b2'] | 5562 | ['Patents awarded to Virginia inventors in Alexandria'] |
| 13 | ['en', 'la', 'el', 'que', 'los', 'del', 'para', 'por', 'una', 'las'] | 7589 | ['Mexican fintech startup reaches unicorn status amid global crisis'] |
| 14 | ['wiki', 'pirates', 'like', 'piece', 'luffy', 'manga', 'anime', 'lol', 'chapter', 'vol'] | 29399 | ['Characters and events in various series and performances involving pirates and music'] |
| 15 | ['biden', 'capitol', 'trump', 'president', 'election', 'said', 'democracy', 'ukraine', 'people', 'house'] | 15650 | ["President Biden's Speech Commemorating the Capitol Riot Anniversary and Accusations Against Trump"] |
| 16 | ['great', 'good', 'excellent', 'awesome', 'nice', 'ok', 'wonderful', 'thanks', 'service', 'amazing'] | 9391 | ['High Satisfaction Service'] |
| 17 | ['hi', 'hey', 'hello', 'hu', 'bush', 'yuh', 'howdy', 'chiasson', 'kira', 'yoe'] | 3496 | ['Greetings and Names'] |
| 18 | ['cou', 'county', 'case', 'plaintiff', 'filed', 'notice', 'attorney', 'judgment', 'civil', 'said'] | 9937 | ['Foreclosure Auction Notices in Suffolk County Courts'] |
| 19 | ['arena', 'center', 'tour', 'music', 'dates', 'tx', 'ca', 'album', 'garden', 'band'] | 16963 | ["Madonna's Global Arena Tour Dates and Ticket Information"] |
| 20 | ['ihearadio', 'jingle', 'ball', 'photo', 'presented', 'tour', 'lizzo', 'lovato', 'demi', 'stage'] | 7139 | ['ihearadio jingle ball 2022 performances featuring dua lipa, lizzo, charlie puth, the kid laroi, ajr, demi lovato'] |
| 21 | ['lounge', 'airpo', 'new', 'food', 'city', 'restaurant', 'lounges', 'hotel', 'like', 'park'] | 13708 | ['Pebblecreek Retirement Community and Amenities'] |
| 22 | ['garcia', 'davis', 'fight', 'gervonta', 'hector', 'ennis', 'boxing', 'luis', 'tank', 'wwe'] | 3245 | ['Gervonta Davis defends title against Hector Luis Garcia in boxing match'] |
| 23 | ['ihearadio', 'photo', 'festival', 'music', 'getty', 'images', 'ego', 'alter', 'fans', 'chili'] | 4962 | ['2023 iHeartRadio Alter Ego Music Festival Highlights'] |
| 24 | ['farm', 'farmers', 'loans', 'mogage', 'loan', 'agricultural', 'usda', 'agriculture', 'program', 'land'] | 2692 | ['Farm Loans and Financial Assistance for Farmers'] |
| 25 | ['banks', 'bank', 'zelle', 'overdraft', 'fees', 'customers', 'said', 'fraud', 'money', 'silicon'] | 8970 | ["Impact of Bank of America's Overdraft Fee Reduction on the Banking Industry"] |
| 26 | ['covid', 'nyc', 'bar', 'search', 'map', 'detroit', 'educational', 'recognition', 'missing', 'delta'] | 1398 | ['Search for COVID information in NYC and Detroit'] |
| 27 | ['helix', 'solutions', 'aris', 'energy', 'water', 'rating', 'hlx', 'group', 'research', 'stock'] | 1635 | ['Helix Energy Solutions Group Stock and Ratings Analysis'] |
| 28 | ['bs', 'blt', 'bue', 'crap', 'bib', 'ugly', 'null', 'honestly', 'bachelor', 'coupon'] | 1286 | ['bs, blt, bue, crap, bib, ugly, null, honestly, bachelor, coupon'] |
| 29 | ['matador', 'ironwood', 'resources', 'mtdr', 'pharmaceuticals', 'company', 'rating', 'irwd', 'shares', 'stock'] | 1336 | ['matador resources stock acquisitions and analyst ratings'] |
| 30 | ['doubleverify', 'dv', 'shopify', 'rating', 'stock', 'shares', '00', 'quaer', 'research', 'shop'] | 1184 | ['DoubleVerify Holdings Inc (NYSE: DV) Stock and Investor Activity'] |
| 31 | ['__', 'add', 'date', 'correct', 'poor', 'location', 'pm', 'a1', 'aqesome', 'interested'] | 1583 | ['Data Quality Enhancement - Location Correction and Date Addition'] |
| 32 | ['amphastar', 'crowdstrike', 'pharmaceuticals', 'amph', 'rating', 'stock', 'shares', 'crwd', 'sold', 'company'] | 1087 | ['Analysis of recent developments in the stock ratings and insider activities of Amphastar Pharmaceuticals'] |
| 33 | ['boy', 'wentz', 'fob', 'band', 'ego', 'alter', 'cryptic', 'stump', 'album', 'guitar'] | 1233 | ['Fall Out Boy announces new era with cryptic ad and upcoming album, guitarist Joe Trohman discusses evolving sound'] |
| 34 | ['nabors', 'industries', 'arvinas', 'drilling', 'rating', 'nbr', '00', 'research', 'target', 'stock'] | 1215 | ['Nabors Industries stock ratings and financial performance'] |
| 35 | ['arena', 'blink', 'center', '182', 'tour', 'jul', 'delonge', 'oct', 'festival', 'band'] | 1573 | ['Blink 182 World Tour Announcement with Tom DeLonge Reunion and New Album'] |
| 36 | ['biosciences', 'akoya', 'biopharmaceuticals', 'ideaya', 'stock', 'kodiak', 'shares', 'sciences', 'akya', 'price'] | 1795 | ['Nasdaq trading update for Akoya Biosciences Inc.'] |
| 37 | ['therapeutics', 'rapt', 'chimerix', 'cmrx', 'rating', 'stock', 'repare', 'shares', '00', 'adc'] | 1170 | ['Rapt Therapeutics Stock Rating Analysis'] |
| 38 | ['written', 'episode', 'news', 'video', 'season', 'new', 'wiki', 'january', 'december', 'live'] | 5740 | ["NYE events scaled back, Jo Koy's comedy journey, booster shots at Boston first night, Chicago 2021 review"] |
| 39 | ['like', 'commercial', 'know', 'wallet', 'got', 'hesitation', 'love', 'don', 've', 'dot'] | 13044 | ['Capital One Financial Services'] |
| 40 | ['wizards', 'homebody', 'predictions', 'odds', 'picks', 'et', 'tip', 'spurs', 'nba', 'expe'] | 2093 | ['New York Knicks vs Washington Wizards NBA Predictions and Odds'] |
| 41 | ['sweepstakes', 'sponsor', 'prize', 'entry', 'winner', 'station', 'text', 'edt', 'designated', 'entrant'] | 852 | ['Sweepstakes rules and eligibility for 2022 iheacountry festival flyaway sweepstakes'] |
| 42 | ['cameron', 'getty', 'images', 'photo', 'song', 'singer', 'sultry', 'stage', 'dove', 'noh'] | 785 | ["Dove Cameron's Performance at iHeartRadio Jingle Ball 2022"] |
| 43 | ['arena', 'center', 'aug', 'drake', 'tour', 'jul', 'ca', 'sat', 'lamar', 'blur'] | 1120 | ['Drake "It All Blur" 2023 Tour Dates'] |
| 44 | ['eur1', 'eur', 'price', 'director', 'change', 'assets', 'past', 'section', 'net', 'year'] | 2216 | ['Belgian diversified holding company stock price movements in EUR'] |
| 45 | ['puth', 'like', 'love', 'song', 'music', 'jojo', 'charlie', 'loser', 'arpeggios', 'jungkook'] | 1798 | ["Charlie Puth's new album release and bromance with Jungkook from BTS"] |
| 46 | ['easy', 'peasy', 'fast', 'quick', 'simple', 'super', 'smooth', 'thanks', 'pretty', 'love'] | 2191 | ['Topic: Easy and Quick Solutions'] |
| 47 | ['immersive', 'vr', 'tech', 'forward', 'looking', 'xr', 'cse', 'uncontained', 'information', 'synthesisvr'] | 1611 | ['XR Immersive Tech and SynthesisVR Announcement and Updates'] |
| 48 | ['chargepoint', 'chpt', 'rating', 'stock', '00', 'shares', 'research', 'sold', 'quaer', 'target'] | 592 | ['ChargePoint Holdings Inc Stock Analysis and Investor Activity'] |
| 49 | ['chvrches', 'hott', 'stranding', 'mayberry', 'ego', 'alter', 'death', 'lauren', 'band', 'followed'] | 563 | ['Chvrches performance at iHeartRadio Alter Ego and potential involvement in Death Stranding soundtrack'] |
| 50 | ['tkk', 'tk', 'tzsarbreaux', '20se', 'ttds', 'lmk', 'tdp', 'demko', 'vols', 'tcl'] | 630 | ['tkk, tk, tzsarbreaux, 20se, ttds, lmk, tdp, demko, vols, tcl'] |
| 51 | ['morello', 'white', 'belasco', 'untamable', 'unpredictable', 'wild', 'pa', 'theater', 'awesome', 'måneskin'] | 561 | ['Jack White and Tom Morello Rock Out at Belasco Theater'] |
| 52 | ['peapack', 'gladstone', 'pgc', 'ffo', 'expenses', 'management', 'development', 'financial', 'product', 'candidates'] | 677 | ['NYSE stock performance of Plymouth Industrial REIT'] |
| 53 | ['white', 'footage', 'ihearadio', 'beck', 'stripes', 'shaped', 'jack', 'solo', 'fans', 'army'] | 668 | ["Jack White's Asia Tour and Fan Engagement"] |
| 54 | ['libey', 'energy', 'lb', 'oilfield', 'stock', 'rating', 'shares', '00', 'services', 'company'] | 853 | ['Libey Energy Inc Stock Performance and Analyst Ratings'] |
| 55 | ['trump', 'investigation', 'james', 'attorney', 'organization', 'donald', 'office', 'said', 'evidence', 'cou'] | 1024 | ["Investigation into Trump Organization's Alleged Misleading Asset Valuations led by Attorney General James"] |
| 56 | ['adr', 'adrs', 'mellon', 'depository', 'composite', 'york', 'llc', 'years', 'past', 'price'] | 3156 | ['topic stenka: adr adrs composite price york years'] |
| 57 | ['energy', 'rating', 'chesapeake', 'oil', 'enerplus', 'gas', 'research', '00', 'estimates', 'stock'] | 2762 | ['Energy Stock Ratings and Earnings Forecasts'] |
| 58 | ['center', 'arena', 'tso', '12', 'ghosts', 'tour', 'eve', 'oh', 'matinee', 'siberian'] | 484 | ['Trans Siberian Orchestra 2022 Winter Tour - The Ghosts of Christmas Eve Tour Dates'] |
| 59 | ['socure', 'identity', 'verification', 'fraud', 'ventures', 'customers', 'digital', 'identities', 'mend', 'sass'] | 567 | ['Socure - Leader in Digital Identity Verification and Fraud Solutions'] |
| 60 | ['dua', 'pop', 'lipa', 'ihearadio', 'nostalgia', 'album', 'dance', 'dancers', 'photo', 'jingle'] | 747 | ["Dua Lipa's Pop Success and Dance Performances at iHeartRadio Jingle Ball"] |
| 61 | ['laroi', 'kid', 'song', 'unreleased', 'acoustic', 'ihearadio', 'biggest', 'stage', 'jingle', 'glowed'] | 413 | ["The Kid Laroi's Unreleased Acoustic Songs at iHeartRadio Jingle Ball"] |
| 62 | ['fennec', 'pharmaceuticals', 'fenc', 'rating', '00', 'frx', 'stock', 'research', 'analysts', 'company'] | 466 | ['Fennec Pharmaceuticals Stock Analysis & Ratings'] |
| 63 | ['lizzo', 'dance', 'kardashian', 'noh', 'ones', 'cw', 'fun', 'ryan', 'conce', 'ihearadio'] | 454 | ["Lizzo and Noh's Fun Dance Collaboration with Kardashian and Ryan at Concert"] |
| 64 | ['saul', 'centers', 'bfs', 'rating', 'zacks', 'research', 'stock', 'shares', 'estate', 'buy'] | 316 | ['Saul Centers Inc (NYSE: BFS) Earnings and Stock Analysis'] |
| 65 | ['ryan', 'brothers', 'jingle', 'ajr', 'ihearadio', 'ball', 'jack', 'inspired', 'trio', 'weak'] | 536 | ['AJR brothers performance at iHeartRadio Jingle Ball with inspired song "Weak"'] |
| 66 | ['food', 'wine', 'festival', 'chef', 'beach', 'miami', 'sobewff', 'chefs', 'beard', 'restaurant'] | 1107 | ['South Beach Wine and Food Festival in Miami Beach'] |
| 67 | ['arena', 'spos', 'pumpkins', 'smashing', 'center', 'betting', 'addiction', 'corgan', '10', 'jane'] | 636 | ['Smashing Pumpkins\' Epic 33-Track Album "Atum" Release and Tour Dates'] |
| 68 | ['max', 'ava', 'jingle', 'ihearadio', 'ball', 'changed', 'perception', 'breakup', 'weapons', 'album'] | 420 | ["Ava Max's Performance at the 2022 iHeartRadio Jingle Ball and Breakup-Inspired Album"] |
| 69 | ['montrose', 'environmental', 'meg', 'group', 'rating', 'permitting', 'response', 'projects', 'shares', 'stock'] | 347 | ['Montrose Environmental Group Stock Analysis and Ratings'] |
| 70 | ['lauv', 'song', 'jingle', 'photo', 'getty', 'images', 'loneliness', 'ball', 'ihearadio', 'tired'] | 344 | ["Lauv's Emotional Performance at iHeartRadio Jingle Ball"] |
| 71 | ['niall', 'harry', 'album', 'circus', 'narry', 'direction', 'horan', 'liam', 'thank', 'fan'] | 396 | ["Potential Harry Styles Collaboration on Niall Horan's Upcoming Circus-Themed Album"] |
| 72 | ['flute', 'lizzo', 'library', 'crystal', 'congress', 'madison', 'history', 'james', 'played', 'instrument'] | 1246 | ["Lizzo Playing James Madison's Crystal Flute at Library of Congress"] |
| 73 | ['alamos', 'agi', 'gold', 'newswire', 'globe', 'creighton', 'tsx', 'toronto', 'production', 'georgetown'] | 516 | ['Alamos Gold Inc Financial Results Q3 2022'] |
| 74 | ['capitals', 'sharks', 'nhl', 'expe', 'odds', 'puck', 'lines', 'analyze', 'scheduled', 'tipico'] | 571 | ['NHL game analysis between San Jose Sharks and Washington Capitals'] |
| 75 | ['07', '06', 'arena', 'center', 'paramore', 'album', '05', 'ca', 'kia', 'fieldhouse'] | 698 | ["Paramore's 2023 Tour Dates at Various Arenas"] |
| 76 | ['dcfc', 'tritium', 'chargers', 'rating', 'limited', 'research', 'stock', '00', 'repo', 'shares'] | 288 | ['Tritium DCFC Stock Ratings and Institutional Trading'] |
| 77 | ['groove', 'engagement', 'sales', 'g2', 'salesforce', 'platform', 'enterprises', 'leading', 'trustradius', 'customer'] | 354 | ['Groove - Leading Salesforce Sales Engagement Platform for Enterprises on G2'] |
| 78 | ['album', 'billboard', 'songs', 'cha', 'swift', 'trending', 'hot', 'bts', 'midnights', 'song'] | 962 | ['Billboard Hot Trending Songs - Swift\'s "Midnights" Album and BTS Songs'] |
| 79 | ['google', 'viual', 'card', 'cards', 'pay', 'wallet', 'android', 'use', 'credit', 'payment'] | 1194 | ['Virtual Credit Cards for Secure Online Shopping'] |
| 80 | ['train', 'album', 'gold', 'ihearadio', 'release', 'exclusive', 'love', 'songs', 'tour', 'jewel'] | 540 | ['Train\'s "Am Gold" Album Release Event with iHeartRadio'] |
| 81 | ['ihearadio', 'chili', 'peppers', 'hair', 'hot', 'shared', 'red', 'double', 'toothed', 'bukowski'] | 381 | ['Red Hot Chili Peppers bassist Flea and iHeartRadio events'] |
| 82 | ['dj', 'baseball', 'ice', 'hockey', 'soccer', 'football', 'basketball', 'videos', 'prnewswire', 'token'] | 749 | ['sports entertainment and news'] |
| 83 | ['dhabi', 'abu', 'adcb', 'aed1', 'al', 'aed', 'arab', 'bank', 'commercial', 'mohamed'] | 360 | ['Abu Dhabi Commercial Bank (ADCB) Stock Analysis'] |
| 84 | ['salle', 'saint', 'la', 'joseph', 'explorers', 'odds', 'hawks', 'tournament', 'st', 'atlantic'] | 254 | ['Saint Joseph vs La Salle Atlantic 10 Tournament Odds Prediction'] |
| 85 | ['holiday', 'day', 'open', 'closed', 'holidays', 'bank', 'banks', 'christmas', 'hours', 'federal'] | 976 | ['Bank Holiday Schedule for 2022 - Are Banks Open or Closed on MLK Day and Other Holidays?'] |
| 86 | ['iheacountry', 'festival', 'country', 'ihearadio', 'austin', 'pt', 'ram', 'stage', 'performances', 'moody'] | 1490 | ['iheacountry festival lineup in Austin, May 13th'] |
| 87 | ['foods', 'simply', 'smpl', 'kilts', 'good', 'atkins', 'mr', 'growth', 'scalzo', 'nasdaq'] | 387 | ['Simply Good Foods Company (NASDAQ: SMPL) Stock Analysis - End of Day and End of Week Reports'] |
| 88 | ['parking', 'howard', 'city', 'divisons', 'washingtondc', 'citycenterdc', 'mpd', 'reliably', 'nats', 'auxiliary'] | 373 | ['Parking Resources in Washington, DC'] |
| 89 | ['bellamy', 'delonge', 'extraterrestrial', 'ego', 'alter', 'alien', 'disappointment', 'muse', 'hunting', 'tom'] | 289 | ["Tom DeLonge's Alien Hunting Invitation to Matt Bellamy at iHeartRadio Alter Ego"] |
| 90 | ['rhett', 'country', 'thomas', 'iheacountry', 'album', 'akins', 'underwood', 'ihearadio', 'festival', 'song'] | 4136 | ["Thomas Rhett's Exclusive iHeartRadio Album Release Party featuring Collaborations and New Songs"] |
| 91 | ['lgbt', 'prnewswire', 'national', 'nglcc', 'wbenc', 'nbic', 'council', 'saturday', 'business', 'corporations'] | 912 | ['National LGBT Chamber of Commerce Annual CoHo Announcement - Washington, Oct 27 2022'] |
| 92 | ['tires', 'tire', 'goodyear', 'save', 'walma', 'snow', 'deals', 'firestone', 'michelin', 'terrain'] | 453 | ['Cyber Monday and Black Friday Tire Deals: Goodyear, Michelin, Firestone, Walmart (Walma)'] |
| 93 | ['customer', 'marketing', 'customers', 'business', 'b2b', 'digital', 'brand', 'transformation', 'centric', 'brands'] | 844 | ['The Role of Email Marketing in Driving Customer-Centric Digital Transformation'] |
| 94 | ['musk', 'elon', 'tiktok', 'social', 'media', 'users', 'said', 'tech', 'content', 'google'] | 1539 | ["Twitter chaos and Musk's influence on the app economy"] |
| 95 | ['cincinnati', 'fhlb', 'results', 'unaudited', 'prnewswire', 'released', 'federal', 'loan', 'home', 'ended'] | 581 | ['Federal Home Loan Bank of Cincinnati Unauidted Financial Results - Q3 2022'] |
| 96 | ['accurate', 'complete', 'securities', 'reader', 'necessarily', 'assume', 'reviewed', 'determined', 'information', 'commission'] | 383 | ['securities and exchange commission filing review and accuracy'] |
| 97 | ['22', 'arena', 'center', 'vengeance', 'viva', 'las', 'panic', '10', 'disco', 'tx'] | 733 | ["Panic at the Disco's Viva Las Vengeance Tour Dates"] |
| 98 | ['button', 'mobile', 'marketers', 'commerce', 'prnewswire', 'platform', 'cookieless', 'optimized', 'surpassed', 'leading'] | 546 | ['mobile commerce platform surpasses billion in revenue'] |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 500
* n_gram_range: (1, 1)
* nr_topics: 100
* seed_topic_list: None
* top_n_words: 10
* verbose: True
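For reference, a sketch of how a BERTopic model with these hyperparameters would be instantiated; the embedding model and any custom UMAP/HDBSCAN components used for training are not listed on this card, so they are omitted here and this is illustrative only:

```python
from bertopic import BERTopic

# Instantiate BERTopic with the hyperparameters reported above.
topic_model = BERTopic(
    calculate_probabilities=False,
    language=None,
    low_memory=False,
    min_topic_size=500,
    n_gram_range=(1, 1),
    nr_topics=100,
    seed_topic_list=None,
    top_n_words=10,
    verbose=True,
)
```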
## Framework versions
* Numpy: 1.26.4
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 2.0.3
* Scikit-Learn: 1.4.1.post1
* Sentence-transformers: 2.5.1
* Transformers: 4.39.1
* Numba: 0.59.1
* Plotly: 5.20.0
* Python: 3.11.8
| {"library_name": "bertopic", "tags": ["bertopic"], "pipeline_tag": "text-classification"} | AlexanderHolmes0/C2-Topic-Model-100 | null | [
"bertopic",
"text-classification",
"has_space",
"region:us"
] | null | 2024-04-18T02:02:41+00:00 | [] | [] | TAGS
#bertopic #text-classification #has_space #region-us
| C2-Topic-Model-100
==================
This is a BERTopic model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
Usage
-----
To use this model, please install BERTopic:
You can use the model as follows:
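A minimal sketch of both steps, assuming a recent BERTopic release that supports loading directly from the Hugging Face Hub (the repo id below is this model's id):

```python
# pip install -U bertopic
from bertopic import BERTopic

# Load the trained topic model from the Hub and inspect its topics.
topic_model = BERTopic.load("AlexanderHolmes0/C2-Topic-Model-100")
topic_model.get_topic_info()
```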
An example of the ChatGPT-3.5 Turbo representations:
!"URL"
Topic overview
--------------
* Number of topics: 100
* Number of training documents: 828299
Click here for an overview of all topics.
Training hyperparameters
------------------------
* calculate\_probabilities: False
* language: None
* low\_memory: False
* min\_topic\_size: 500
* n\_gram\_range: (1, 1)
* nr\_topics: 100
* seed\_topic\_list: None
* top\_n\_words: 10
* verbose: True
Framework versions
------------------
* Numpy: 1.26.4
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 2.0.3
* Scikit-Learn: 1.4.1.post1
* Sentence-transformers: 2.5.1
* Transformers: 4.39.1
* Numba: 0.59.1
* Plotly: 5.20.0
* Python: 3.11.8
| [] | [
"TAGS\n#bertopic #text-classification #has_space #region-us \n"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - yuffish/model
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
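Until the official snippet is filled in, here is a minimal sketch of running this DreamBooth checkpoint with the Diffusers pipeline; the repo id and the instance prompt are the ones stated on this card, while the dtype and device settings are assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-finetuned weights and generate with the instance token "sks dog".
pipeline = StableDiffusionPipeline.from_pretrained("yuffish/model", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")

image = pipeline("a photo of sks dog", num_inference_steps=50).images[0]
image.save("sks-dog.png")
```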
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "inference": true, "base_model": "runwayml/stable-diffusion-v1-5", "instance_prompt": "a photo of sks dog"} | yuffish/model | null | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-18T02:03:24+00:00 | [] | [] | TAGS
#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# DreamBooth - yuffish/model
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using DreamBooth.
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# DreamBooth - yuffish/model\n\nThis is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# DreamBooth - yuffish/model\n\nThis is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mytest_trainer_base-cased
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1983
- Rmse: 0.2961
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
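A rough reconstruction of these settings as `TrainingArguments`, assuming the standard Hugging Face `Trainer` API was used (the `output_dir` name is an assumption taken from the card title):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mytest_trainer_base-cased",  # assumed from the card title
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer.
)
```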
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2679 | 1.0 | 642 | 0.2250 | 0.3848 |
| 0.2659 | 2.0 | 1284 | 0.2318 | 0.4252 |
| 0.2505 | 3.0 | 1926 | 0.2295 | 0.3848 |
| 0.2478 | 4.0 | 2568 | 0.2329 | 0.3848 |
| 0.2459 | 5.0 | 3210 | 0.2328 | 0.3848 |
| 0.2452 | 6.0 | 3852 | 0.2215 | 0.3848 |
| 0.2522 | 7.0 | 4494 | 0.2444 | 0.3848 |
| 0.2505 | 8.0 | 5136 | 0.2164 | 0.3504 |
| 0.2087 | 9.0 | 5778 | 0.2409 | 0.3152 |
| 0.1736 | 10.0 | 6420 | 0.1983 | 0.2961 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.18.0
- Tokenizers 0.13.3
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "xlnet-base-cased", "model-index": [{"name": "mytest_trainer_base-cased", "results": []}]} | raulgdp/Analisis-sentimientos-bert-uncased | null | [
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"base_model:xlnet-base-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T02:04:28+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #xlnet #text-classification #generated_from_trainer #base_model-xlnet-base-cased #license-mit #autotrain_compatible #endpoints_compatible #region-us
| mytest\_trainer\_base-cased
===========================
This model is a fine-tuned version of xlnet-base-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1983
* Rmse: 0.2961
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.31.0
* Pytorch 2.0.1+cu117
* Datasets 2.18.0
* Tokenizers 0.13.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.31.0\n* Pytorch 2.0.1+cu117\n* Datasets 2.18.0\n* Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #xlnet #text-classification #generated_from_trainer #base_model-xlnet-base-cased #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.31.0\n* Pytorch 2.0.1+cu117\n* Datasets 2.18.0\n* Tokenizers 0.13.3"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
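Since the snippet is still missing, a minimal sketch of loading this adapter on top of the `Salesforce/blip2-opt-2.7b` base model listed in the metadata; the adapter repo id is taken from this entry, and exact usage may differ:

```python
from peft import PeftModel
from transformers import Blip2ForConditionalGeneration, Blip2Processor

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
base_model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

# Attach the PEFT adapter weights to the frozen base model.
model = PeftModel.from_pretrained(base_model, "mervezorlu/image-color-model-v2")
```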
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 | {"library_name": "peft", "base_model": "Salesforce/blip2-opt-2.7b"} | mervezorlu/image-color-model-v2 | null | [
"peft",
"arxiv:1910.09700",
"base_model:Salesforce/blip2-opt-2.7b",
"region:us"
] | null | 2024-04-18T02:04:29+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-Salesforce/blip2-opt-2.7b #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Salesforce/blip2-opt-2.7b #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | kmpartner/distilbert-base-uncased-finetuned-squad-d5716d28 | null | [
"transformers",
"distilbert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T02:05:39+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #distilbert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #distilbert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | thanhnew2001/bank3 | null | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-18T02:09:11+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bloom #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bloom #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft | ## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
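For reference, a sketch of the equivalent `BitsAndBytesConfig` in current `transformers`; the `llm_int8_*` values above are the library defaults and are left implicit:

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```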
### Framework versions
- PEFT 0.4.0
| {"library_name": "peft"} | Rimyy/LLAmaFineTuneNewDataMath | null | [
"peft",
"region:us"
] | null | 2024-04-18T02:10:10+00:00 | [] | [] | TAGS
#peft #region-us
| ## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
| [
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16",
"### Framework versions\n\n\n- PEFT 0.4.0"
] | [
"TAGS\n#peft #region-us \n",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16",
"### Framework versions\n\n\n- PEFT 0.4.0"
] |
image-segmentation | pytorch |
# TransNeXt
Official Model release
for ["TransNeXt: Robust Foveal Visual Perception for Vision Transformers"](https://arxiv.org/pdf/2311.17132.pdf) [CVPR 2024]
.
## Model Details
- **Code:** https://github.com/DaiShiResearch/TransNeXt
- **Paper:** [TransNeXt: Robust Foveal Visual Perception for Vision Transformers](https://arxiv.org/abs/2311.17132)
- **Author:** [Dai Shi](https://github.com/DaiShiResearch)
- **Email:** [email protected]
## Methods
#### Pixel-focused attention (Left) & aggregated attention (Right):

#### Convolutional GLU (First on the right):

## Results
#### Image Classification, Detection and Segmentation:

#### Attention Visualization:

## Model Zoo
### Image Classification
***Classification code & weights & configs & training logs are >>>[here](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/ )<<<.***
**ImageNet-1K 224x224 pre-trained models:**
| Model | #Params | #FLOPs |IN-1K | IN-A | IN-C↓ |IN-R|Sketch|IN-V2|Download |Config| Log |
|:---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:|
| TransNeXt-Micro|12.8M|2.7G| 82.5 | 29.9 | 50.8|45.8|33.0|72.6|[model](https://huggingface.co/DaiShiResearch/transnext-micro-224-1k/resolve/main/transnext_micro_224_1k.pth?download=true) |[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_micro.py)|[log](https://huggingface.co/DaiShiResearch/transnext-micro-224-1k/raw/main/transnext_micro_224_1k.txt) |
| TransNeXt-Tiny |28.2M|5.7G| 84.0| 39.9| 46.5|49.6|37.6|73.8|[model](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_tiny.py)|[log](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/raw/main/transnext_tiny_224_1k.txt)|
| TransNeXt-Small |49.7M|10.3G| 84.7| 47.1| 43.9|52.5| 39.7|74.8 |[model](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_small.py)|[log](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/raw/main/transnext_small_224_1k.txt)|
| TransNeXt-Base |89.7M|18.4G| 84.8| 50.6|43.5|53.9|41.4|75.1| [model](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_base.py)|[log](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/raw/main/transnext_base_224_1k.txt)|
**ImageNet-1K 384x384 fine-tuned models:**
| Model | #Params | #FLOPs |IN-1K | IN-A |IN-R|Sketch|IN-V2| Download |Config|
|:---:|:---:|:---:|:---:| :---:|:---:|:---:| :---:|:---:|:---:|
| TransNeXt-Small |49.7M|32.1G| 86.0| 58.3|56.4|43.2|76.8| [model](https://huggingface.co/DaiShiResearch/transnext-small-384-1k-ft-1k/resolve/main/transnext_small_384_1k_ft_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/finetune/transnext_small_384_ft.py)|
| TransNeXt-Base |89.7M|56.3G| 86.2| 61.6|57.7|44.7|77.0| [model](https://huggingface.co/DaiShiResearch/transnext-base-384-1k-ft-1k/resolve/main/transnext_base_384_1k_ft_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/finetune/transnext_base_384_ft.py)|
**ImageNet-1K 256x256 pre-trained model fully utilizing aggregated attention at all stages:**
*(See Table.9 in Appendix D.6 for details)*
| Model |Token mixer| #Params | #FLOPs |IN-1K |Download |Config| Log |
|:---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:|
|TransNeXt-Micro|**A-A-A-A**|13.1M|3.3G| 82.6 |[model](https://huggingface.co/DaiShiResearch/transnext-micro-AAAA-256-1k/resolve/main/transnext_micro_AAAA_256_1k.pth?download=true) |[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_micro_AAAA_256.py)|[log](https://huggingface.co/DaiShiResearch/transnext-micro-AAAA-256-1k/blob/main/transnext_micro_AAAA_256_1k.txt) |
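The checkpoints in these tables are plain PyTorch `.pth` files; a sketch of loading one locally is shown below (the filename and the `transnext_tiny` builder are assumptions — the actual model definitions live in the official repository):

```python
import torch

# Downloaded from one of the links in the tables above.
checkpoint = torch.load("transnext_tiny_224_1k.pth", map_location="cpu")
state_dict = checkpoint.get("model", checkpoint) if isinstance(checkpoint, dict) else checkpoint

# model = transnext_tiny(pretrained=False)  # built from the repo's classification code
# model.load_state_dict(state_dict)
```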
### Object Detection
***Object detection code & weights & configs & training logs are >>>[here](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/ )<<<.***
**COCO object detection and instance segmentation results using the Mask R-CNN method:**
| Backbone | Pretrained Model| Lr Schd| box mAP | mask mAP | #Params | Download |Config| Log |
|:---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:|:---:|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true) |1x|49.9|44.6|47.9M|[model](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-tiny-coco/resolve/main/mask_rcnn_transnext_tiny_fpn_1x_coco_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/maskrcnn/configs/mask_rcnn_transnext_tiny_fpn_1x_coco.py)|[log](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-tiny-coco/raw/main/mask_rcnn_transnext_tiny_fpn_1x_coco_in1k.log.json)|
| TransNeXt-Small | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true) |1x|51.1|45.5|69.3M|[model](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-small-coco/resolve/main/mask_rcnn_transnext_small_fpn_1x_coco_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/maskrcnn/configs/mask_rcnn_transnext_small_fpn_1x_coco.py)|[log](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-small-coco/raw/main/mask_rcnn_transnext_small_fpn_1x_coco_in1k.log.json)|
| TransNeXt-Base | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true) |1x|51.7|45.9|109.2M|[model](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-base-coco/resolve/main/mask_rcnn_transnext_base_fpn_1x_coco_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/maskrcnn/configs/mask_rcnn_transnext_base_fpn_1x_coco.py)|[log](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-base-coco/raw/main/mask_rcnn_transnext_base_fpn_1x_coco_in1k.log.json)|
**COCO object detection results using the DINO method:**
| Backbone | Pretrained Model| scales | epochs | box mAP | #Params | Download |Config| Log |
|:---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:|:---:|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|4scale | 12|55.1|47.8M|[model](https://huggingface.co/DaiShiResearch/dino-4scale-transnext-tiny-coco/resolve/main/dino_4scale_transnext_tiny_12e_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/dino/configs/dino-4scale_transnext_tiny-12e_coco.py)|[log](https://huggingface.co/DaiShiResearch/dino-4scale-transnext-tiny-coco/raw/main/dino_4scale_transnext_tiny_12e_in1k.json)|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|5scale | 12|55.7|48.1M|[model](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-tiny-coco/resolve/main/dino_5scale_transnext_tiny_12e_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/dino/configs/dino-5scale_transnext_tiny-12e_coco.py)|[log](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-tiny-coco/raw/main/dino_5scale_transnext_tiny_12e_in1k.json)|
| TransNeXt-Small | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true)|5scale | 12|56.6|69.6M|[model](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-small-coco/resolve/main/dino_5scale_transnext_small_12e_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/dino/configs/dino-5scale_transnext_small-12e_coco.py)|[log](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-small-coco/raw/main/dino_5scale_transnext_small_12e_in1k.json)|
| TransNeXt-Base | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true)|5scale | 12|57.1|110M|[model](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-base-coco/resolve/main/dino_5scale_transnext_base_12e_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/dino/configs/dino-5scale_transnext_base-12e_coco.py)|[log](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-base-coco/raw/main/dino_5scale_transnext_base_12e_in1k.json)|
### Semantic Segmentation
***Semantic segmentation code & weights & configs & training logs are >>>[here](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/ )<<<.***
**ADE20K semantic segmentation results using the UPerNet method:**
| Backbone | Pretrained Model| Crop Size |Lr Schd| mIoU|mIoU (ms+flip)| #Params | Download |Config| Log |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|512x512|160K|51.1|51.5/51.7|59M|[model](https://huggingface.co/DaiShiResearch/upernet-transnext-tiny-ade/resolve/main/upernet_transnext_tiny_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/upernet/configs/upernet_transnext_tiny_512x512_160k_ade20k_ss.py)|[log](https://huggingface.co/DaiShiResearch/upernet-transnext-tiny-ade/blob/main/upernet_transnext_tiny_512x512_160k_ade20k_ss.log.json)|
| TransNeXt-Small | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true)|512x512|160K|52.2|52.5/51.8|80M|[model](https://huggingface.co/DaiShiResearch/upernet-transnext-small-ade/resolve/main/upernet_transnext_small_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/upernet/configs/upernet_transnext_small_512x512_160k_ade20k_ss.py)|[log](https://huggingface.co/DaiShiResearch/upernet-transnext-small-ade/blob/main/upernet_transnext_small_512x512_160k_ade20k_ss.log.json)|
| TransNeXt-Base | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true)|512x512|160K|53.0|53.5/53.7|121M|[model](https://huggingface.co/DaiShiResearch/upernet-transnext-base-ade/resolve/main/upernet_transnext_base_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/upernet/configs/upernet_transnext_base_512x512_160k_ade20k_ss.py)|[log](https://huggingface.co/DaiShiResearch/upernet-transnext-base-ade/blob/main/upernet_transnext_base_512x512_160k_ade20k_ss.log.json)|
* In the context of multi-scale evaluation, TransNeXt reports test results under two distinct scenarios: **interpolation** and **extrapolation** of relative position bias.
**ADE20K semantic segmentation results using the Mask2Former method:**
| Backbone | Pretrained Model| Crop Size |Lr Schd| mIoU| #Params | Download |Config| Log |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|512x512|160K|53.4|47.5M|[model](https://huggingface.co/DaiShiResearch/mask2former-transnext-tiny-ade/resolve/main/mask2former_transnext_tiny_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/mask2former/configs/mask2former_transnext_tiny_160k_ade20k-512x512.py)|[log](https://huggingface.co/DaiShiResearch/mask2former-transnext-tiny-ade/raw/main/mask2former_transnext_tiny_512x512_160k_ade20k_in1k.json)|
| TransNeXt-Small | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true)|512x512|160K|54.1|69.0M|[model](https://huggingface.co/DaiShiResearch/mask2former-transnext-small-ade/resolve/main/mask2former_transnext_small_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/mask2former/configs/mask2former_transnext_small_160k_ade20k-512x512.py)|[log](https://huggingface.co/DaiShiResearch/mask2former-transnext-small-ade/raw/main/mask2former_transnext_small_512x512_160k_ade20k_in1k.json)|
| TransNeXt-Base | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true)|512x512|160K|54.7|109M|[model](https://huggingface.co/DaiShiResearch/mask2former-transnext-base-ade/resolve/main/mask2former_transnext_base_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/mask2former/configs/mask2former_transnext_base_160k_ade20k-512x512.py)|[log](https://huggingface.co/DaiShiResearch/mask2former-transnext-base-ade/raw/main/mask2former_transnext_base_512x512_160k_ade20k_in1k.json)|
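All checkpoints in the tables above are plain PyTorch `.pth` files hosted on the Hugging Face Hub. As a minimal sketch (assuming `torch` and `huggingface_hub` are available), the UPerNet TransNeXt-Tiny weights from this repository can be downloaded and inspected as follows; building the full segmentor additionally requires the configs and model code from the GitHub repository linked above.
```python
import torch
from huggingface_hub import hf_hub_download

# Fetch the checkpoint listed in the UPerNet table above (TransNeXt-Tiny, ADE20K).
ckpt_path = hf_hub_download(
    repo_id="DaiShiResearch/upernet-transnext-tiny-ade",
    filename="upernet_transnext_tiny_512x512_160k_ade20k_in1k.pth",
)

# Load the raw checkpoint on CPU and inspect its contents.
checkpoint = torch.load(ckpt_path, map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)  # mmsegmentation-style checkpoints usually nest weights here
print(f"{len(state_dict)} parameter tensors, e.g. {next(iter(state_dict))}")
```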
## Citation
If you find our work helpful, please consider citing the following BibTeX entry. We would greatly appreciate a star for this
project.
@misc{shi2023transnext,
author = {Dai Shi},
title = {TransNeXt: Robust Foveal Visual Perception for Vision Transformers},
year = {2023},
eprint = {arXiv:2311.17132},
archivePrefix={arXiv},
primaryClass={cs.CV}
} | {"language": ["en"], "license": "apache-2.0", "library_name": "pytorch", "tags": ["vision"], "datasets": ["imagenet-1k", "ade20k"], "metrics": ["mean_iou"], "pipeline_tag": "image-segmentation"} | DaiShiResearch/upernet-transnext-tiny-ade | null | [
"pytorch",
"vision",
"image-segmentation",
"en",
"dataset:imagenet-1k",
"dataset:ade20k",
"arxiv:2311.17132",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T02:10:19+00:00 | [
"2311.17132"
] | [
"en"
] | TAGS
#pytorch #vision #image-segmentation #en #dataset-imagenet-1k #dataset-ade20k #arxiv-2311.17132 #license-apache-2.0 #region-us
| TransNeXt
=========
Official Model release
for "TransNeXt: Robust Foveal Visual Perception for Vision Transformers" [CVPR 2024]
.
Model Details
-------------
* Code: URL
* Paper: TransNeXt: Robust Foveal Visual Perception for Vision Transformers
* Author: Dai Shi
* Email: daishiresearch@URL
Methods
-------
#### Pixel-focused attention (Left) & aggregated attention (Right):
!pixel-focused\_attention
#### Convolutional GLU (First on the right):
!Convolutional GLU
Results
-------
#### Image Classification, Detection and Segmentation:
!experiment\_figure
#### Attention Visualization:
!foveal\_peripheral\_vision
Model Zoo
---------
### Image Classification
*Classification code & weights & configs & training logs are >>>here<<<.*
ImageNet-1K 224x224 pre-trained models:
ImageNet-1K 384x384 fine-tuned models:
ImageNet-1K 256x256 pre-trained model fully utilizing aggregated attention at all stages:
*(See Table.9 in Appendix D.6 for details)*
### Object Detection
*Object detection code & weights & configs & training logs are >>>here<<<.*
COCO object detection and instance segmentation results using the Mask R-CNN method:
COCO object detection results using the DINO method:
### Semantic Segmentation
*Semantic segmentation code & weights & configs & training logs are >>>here<<<.*
ADE20K semantic segmentation results using the UPerNet method:
* In the context of multi-scale evaluation, TransNeXt reports test results under two distinct scenarios: interpolation and extrapolation of relative position bias.
ADE20K semantic segmentation results using the Mask2Former method:
If you find our work helpful, please consider citing the following bibtex. We would greatly appreciate a star for this
project.
```
@misc{shi2023transnext,
author = {Dai Shi},
title = {TransNeXt: Robust Foveal Visual Perception for Vision Transformers},
year = {2023},
eprint = {arXiv:2311.17132},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
| [
"#### Pixel-focused attention (Left) & aggregated attention (Right):\n\n\n!pixel-focused\\_attention",
"#### Convolutional GLU (First on the right):\n\n\n!Convolutional GLU\n\n\nResults\n-------",
"#### Image Classification, Detection and Segmentation:\n\n\n!experiment\\_figure",
"#### Attention Visualization:\n\n\n!foveal\\_peripheral\\_vision\n\n\nModel Zoo\n---------",
"### Image Classification\n\n\n*Classification code & weights & configs & training logs are >>>here<<<.*\n\n\nImageNet-1K 224x224 pre-trained models:\n\n\n\nImageNet-1K 384x384 fine-tuned models:\n\n\n\nImageNet-1K 256x256 pre-trained model fully utilizing aggregated attention at all stages:\n\n\n*(See Table.9 in Appendix D.6 for details)*",
"### Object Detection\n\n\n*Object detection code & weights & configs & training logs are >>>here<<<.*\n\n\nCOCO object detection and instance segmentation results using the Mask R-CNN method:\n\n\n\nCOCO object detection results using the DINO method:",
"### Semantic Segmentation\n\n\n*Semantic segmentation code & weights & configs & training logs are >>>here<<<.*\n\n\nADE20K semantic segmentation results using the UPerNet method:\n\n\n\n* In the context of multi-scale evaluation, TransNeXt reports test results under two distinct scenarios: interpolation and extrapolation of relative position bias.\n\n\nADE20K semantic segmentation results using the Mask2Former method:\n\n\n\nIf you find our work helpful, please consider citing the following bibtex. We would greatly appreciate a star for this\nproject.\n\n\n\n```\n@misc{shi2023transnext,\n author = {Dai Shi},\n title = {TransNeXt: Robust Foveal Visual Perception for Vision Transformers},\n year = {2023},\n eprint = {arXiv:2311.17132},\n archivePrefix={arXiv},\n primaryClass={cs.CV}\n}\n\n```"
] | [
"TAGS\n#pytorch #vision #image-segmentation #en #dataset-imagenet-1k #dataset-ade20k #arxiv-2311.17132 #license-apache-2.0 #region-us \n",
"#### Pixel-focused attention (Left) & aggregated attention (Right):\n\n\n!pixel-focused\\_attention",
"#### Convolutional GLU (First on the right):\n\n\n!Convolutional GLU\n\n\nResults\n-------",
"#### Image Classification, Detection and Segmentation:\n\n\n!experiment\\_figure",
"#### Attention Visualization:\n\n\n!foveal\\_peripheral\\_vision\n\n\nModel Zoo\n---------",
"### Image Classification\n\n\n*Classification code & weights & configs & training logs are >>>here<<<.*\n\n\nImageNet-1K 224x224 pre-trained models:\n\n\n\nImageNet-1K 384x384 fine-tuned models:\n\n\n\nImageNet-1K 256x256 pre-trained model fully utilizing aggregated attention at all stages:\n\n\n*(See Table.9 in Appendix D.6 for details)*",
"### Object Detection\n\n\n*Object detection code & weights & configs & training logs are >>>here<<<.*\n\n\nCOCO object detection and instance segmentation results using the Mask R-CNN method:\n\n\n\nCOCO object detection results using the DINO method:",
"### Semantic Segmentation\n\n\n*Semantic segmentation code & weights & configs & training logs are >>>here<<<.*\n\n\nADE20K semantic segmentation results using the UPerNet method:\n\n\n\n* In the context of multi-scale evaluation, TransNeXt reports test results under two distinct scenarios: interpolation and extrapolation of relative position bias.\n\n\nADE20K semantic segmentation results using the Mask2Former method:\n\n\n\nIf you find our work helpful, please consider citing the following bibtex. We would greatly appreciate a star for this\nproject.\n\n\n\n```\n@misc{shi2023transnext,\n author = {Dai Shi},\n title = {TransNeXt: Robust Foveal Visual Perception for Vision Transformers},\n year = {2023},\n eprint = {arXiv:2311.17132},\n archivePrefix={arXiv},\n primaryClass={cs.CV}\n}\n\n```"
] |
text-generation | transformers |
# gemma-2B Fine-Tuning on SAIL/Symbolic-Instruction-Tuning
This repository contains the `gemma-2B` model fine-tuned on the `sail/symbolic-instruction-tuning` dataset. The model is designed to interpret and execute symbolic instructions with improved accuracy and efficiency.
## Overview
The `gemma-2B` model, originally known for its robust language understanding capabilities, has been fine-tuned to enhance its performance on symbolic instruction data. This involves retraining the model on the `sail/symbolic-instruction-tuning` dataset, which comprises a diverse range of instructional data that tests a model's ability to follow abstract and complex directives.
## Motivation
The motivation behind fine-tuning `gemma-2B` on this particular dataset is to bridge the gap between language understanding and execution in a symbolic context. This has wide applications in areas such as code generation, automated reasoning, and more sophisticated AI instruction following.
## Getting Started
To use this model, you'll need to have an account on Hugging Face and the `transformers` library installed. You can install the library using pip:
```bash
pip install transformers
```
Once installed, you can use the following code to load and use the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer from this repository
model_name = "rootsec1/gemma-2B-inst-aipi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Now you can use the model for inference
input_text = "Your symbolic instruction here"
input_ids = tokenizer.encode(input_text, return_tensors="pt")

# Generate the output, capping the number of newly generated tokens
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## Fine-Tuning Process
The model was fine-tuned using the following process:
- Preprocessing: The `sail/symbolic-instruction-tuning` dataset was preprocessed to conform with the input format required by `gemma-2B`.
- Training: The model was fine-tuned using a custom training loop that monitors loss and evaluates on a held-out validation set (an illustrative sketch of this setup is shown below).
- Hyperparameters: The fine-tuning used specific hyperparameters, which you can find in the `training_script.py` file.
- Evaluation: The fine-tuned model was evaluated against a benchmark to ensure that it meets our performance standards.
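As an illustrative sketch of this setup (not the actual `training_script.py`), fine-tuning could be wired up with the Hugging Face `Trainer` roughly as follows; the base checkpoint name, dataset field names, and hyperparameters below are assumptions rather than the values used for this model.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "google/gemma-2b"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Concatenate instruction and response into a single training string per example.
dataset = load_dataset("sail/symbolic-instruction-tuning")

def tokenize(example):
    text = example["input"] + "\n" + example["output"]  # field names are assumptions
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset["train"].map(tokenize, remove_columns=dataset["train"].column_names)

args = TrainingArguments(
    output_dir="gemma-2b-symbolic-it",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,      # placeholder hyperparameters
    num_train_epochs=1,
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # In the actual run, a held-out validation split would be passed as eval_dataset.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```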
| {"license": "apache-2.0", "datasets": ["sail/symbolic-instruction-tuning"]} | rootsec1/gemma-2B-inst-aipi | null | [
"transformers",
"safetensors",
"gguf",
"gemma",
"text-generation",
"conversational",
"dataset:sail/symbolic-instruction-tuning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T02:11:07+00:00 | [] | [] | TAGS
#transformers #safetensors #gguf #gemma #text-generation #conversational #dataset-sail/symbolic-instruction-tuning #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# gemma-2B Fine-Tuning on SAIL/Symbolic-Instruction-Tuning
This repository contains the 'gemma-2B' model fine-tuned on the 'sail/symbolic-instruction-tuning' dataset. The model is designed to interpret and execute symbolic instructions with improved accuracy and efficiency.
## Overview
The 'gemma-2B' model, originally known for its robust language understanding capabilities, has been fine-tuned to enhance its performance on symbolic instruction data. This involves retraining the model on the 'sail/symbolic-instruction-tuning' dataset, which comprises a diverse range of instructional data that tests a model's ability to follow abstract and complex directives.
## Motivation
The motivation behind fine-tuning 'gemma-2B' on this particular dataset is to bridge the gap between language understanding and execution in a symbolic context. This has wide applications in areas such as code generation, automated reasoning, and more sophisticated AI instruction following.
## Getting Started
To use this model, you'll need to have an account on Hugging Face and the 'transformers' library installed. You can install the library using pip:
Once installed, you can use the following code to load and use the model:
## Fine-Tuning Process
The model was fine-tuned using the following process:
- Preprocessing: The 'sail/symbolic-instruction-tuning' dataset was preprocessed to conform with the input format required by 'gemma-2B'.
- Training: The model was fine-tuned using a custom training loop that monitors loss and evaluates on a held-out validation set.
- Hyperparameters: The fine-tuning used specific hyperparameters, which you can find in the 'training_script.py' file.
- Evaluation: The fine-tuned model was evaluated against a benchmark to ensure that it meets our performance standards.
| [
"# gemma-2B Fine-Tuning on SAIL/Symbolic-Instruction-Tuning\n\nThis repository contains the 'gemma-2B' model fine-tuned on the 'sail/symbolic-instruction-tuning' dataset. The model is designed to interpret and execute symbolic instructions with improved accuracy and efficiency.",
"## Overview\n\nThe 'gemma-2B' model, originally known for its robust language understanding capabilities, has been fine-tuned to enhance its performance on symbolic instruction data. This involves retraining the model on the 'sail/symbolic-instruction-tuning' dataset, which comprises a diverse range of instructional data that tests a model's ability to follow abstract and complex directives.",
"## Motivation\n\nThe motivation behind fine-tuning 'gemma-2B' on this particular dataset is to bridge the gap between language understanding and execution in a symbolic context. This has wide applications in areas such as code generation, automated reasoning, and more sophisticated AI instruction following.",
"## Getting Started\n\nTo use this model, you'll need to have an account on Hugging Face and the 'transformers' library installed. You can install the library using pip:\n\n\n\nOnce installed, you can use the following code to load and use the model:",
"## Fine-Tuning Process\n\nThe model was fine-tuned using the following process:\n\n- Preprocessing: The 'sail/symbolic-instruction-tuning' dataset was preprocessed to conform with the input format required by 'gemma-2B'.\n- Training: The model was fine-tuned using a custom training loop that monitors loss and evaluates on a held-out validation set.\n- Hyperparameters: The fine-tuning used specific hyperparameters, which you can find in the 'training_script.py' file.\n- Evaluation: The fine-tuned model was evaluated against a benchmark to ensure that it meets our performance standards."
] | [
"TAGS\n#transformers #safetensors #gguf #gemma #text-generation #conversational #dataset-sail/symbolic-instruction-tuning #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# gemma-2B Fine-Tuning on SAIL/Symbolic-Instruction-Tuning\n\nThis repository contains the 'gemma-2B' model fine-tuned on the 'sail/symbolic-instruction-tuning' dataset. The model is designed to interpret and execute symbolic instructions with improved accuracy and efficiency.",
"## Overview\n\nThe 'gemma-2B' model, originally known for its robust language understanding capabilities, has been fine-tuned to enhance its performance on symbolic instruction data. This involves retraining the model on the 'sail/symbolic-instruction-tuning' dataset, which comprises a diverse range of instructional data that tests a model's ability to follow abstract and complex directives.",
"## Motivation\n\nThe motivation behind fine-tuning 'gemma-2B' on this particular dataset is to bridge the gap between language understanding and execution in a symbolic context. This has wide applications in areas such as code generation, automated reasoning, and more sophisticated AI instruction following.",
"## Getting Started\n\nTo use this model, you'll need to have an account on Hugging Face and the 'transformers' library installed. You can install the library using pip:\n\n\n\nOnce installed, you can use the following code to load and use the model:",
"## Fine-Tuning Process\n\nThe model was fine-tuned using the following process:\n\n- Preprocessing: The 'sail/symbolic-instruction-tuning' dataset was preprocessed to conform with the input format required by 'gemma-2B'.\n- Training: The model was fine-tuned using a custom training loop that monitors loss and evaluates on a held-out validation set.\n- Hyperparameters: The fine-tuning used specific hyperparameters, which you can find in the 'training_script.py' file.\n- Evaluation: The fine-tuned model was evaluated against a benchmark to ensure that it meets our performance standards."
] |
image-segmentation | pytorch |
# TransNeXt
Official Model release
for ["TransNeXt: Robust Foveal Visual Perception for Vision Transformers"](https://arxiv.org/pdf/2311.17132.pdf) [CVPR 2024]
.
## Model Details
- **Code:** https://github.com/DaiShiResearch/TransNeXt
- **Paper:** [TransNeXt: Robust Foveal Visual Perception for Vision Transformers](https://arxiv.org/abs/2311.17132)
- **Author:** [Dai Shi](https://github.com/DaiShiResearch)
- **Email:** [email protected]
## Methods
#### Pixel-focused attention (Left) & aggregated attention (Right):

#### Convolutional GLU (First on the right):

## Results
#### Image Classification, Detection and Segmentation:

#### Attention Visualization:

## Model Zoo
### Image Classification
***Classification code & weights & configs & training logs are >>>[here](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/ )<<<.***
**ImageNet-1K 224x224 pre-trained models:**
| Model | #Params | #FLOPs |IN-1K | IN-A | IN-C↓ |IN-R|Sketch|IN-V2|Download |Config| Log |
|:---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:|
| TransNeXt-Micro|12.8M|2.7G| 82.5 | 29.9 | 50.8|45.8|33.0|72.6|[model](https://huggingface.co/DaiShiResearch/transnext-micro-224-1k/resolve/main/transnext_micro_224_1k.pth?download=true) |[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_micro.py)|[log](https://huggingface.co/DaiShiResearch/transnext-micro-224-1k/raw/main/transnext_micro_224_1k.txt) |
| TransNeXt-Tiny |28.2M|5.7G| 84.0| 39.9| 46.5|49.6|37.6|73.8|[model](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_tiny.py)|[log](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/raw/main/transnext_tiny_224_1k.txt)|
| TransNeXt-Small |49.7M|10.3G| 84.7| 47.1| 43.9|52.5| 39.7|74.8 |[model](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_small.py)|[log](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/raw/main/transnext_small_224_1k.txt)|
| TransNeXt-Base |89.7M|18.4G| 84.8| 50.6|43.5|53.9|41.4|75.1| [model](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_base.py)|[log](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/raw/main/transnext_base_224_1k.txt)|
**ImageNet-1K 384x384 fine-tuned models:**
| Model | #Params | #FLOPs |IN-1K | IN-A |IN-R|Sketch|IN-V2| Download |Config|
|:---:|:---:|:---:|:---:| :---:|:---:|:---:| :---:|:---:|:---:|
| TransNeXt-Small |49.7M|32.1G| 86.0| 58.3|56.4|43.2|76.8| [model](https://huggingface.co/DaiShiResearch/transnext-small-384-1k-ft-1k/resolve/main/transnext_small_384_1k_ft_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/finetune/transnext_small_384_ft.py)|
| TransNeXt-Base |89.7M|56.3G| 86.2| 61.6|57.7|44.7|77.0| [model](https://huggingface.co/DaiShiResearch/transnext-base-384-1k-ft-1k/resolve/main/transnext_base_384_1k_ft_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/finetune/transnext_base_384_ft.py)|
**ImageNet-1K 256x256 pre-trained model fully utilizing aggregated attention at all stages:**
*(See Table.9 in Appendix D.6 for details)*
| Model |Token mixer| #Params | #FLOPs |IN-1K |Download |Config| Log |
|:---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:|
|TransNeXt-Micro|**A-A-A-A**|13.1M|3.3G| 82.6 |[model](https://huggingface.co/DaiShiResearch/transnext-micro-AAAA-256-1k/resolve/main/transnext_micro_AAAA_256_1k.pth?download=true) |[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_micro_AAAA_256.py)|[log](https://huggingface.co/DaiShiResearch/transnext-micro-AAAA-256-1k/blob/main/transnext_micro_AAAA_256_1k.txt) |
### Object Detection
***Object detection code & weights & configs & training logs are >>>[here](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/ )<<<.***
**COCO object detection and instance segmentation results using the Mask R-CNN method:**
| Backbone | Pretrained Model| Lr Schd| box mAP | mask mAP | #Params | Download |Config| Log |
|:---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:|:---:|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true) |1x|49.9|44.6|47.9M|[model](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-tiny-coco/resolve/main/mask_rcnn_transnext_tiny_fpn_1x_coco_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/maskrcnn/configs/mask_rcnn_transnext_tiny_fpn_1x_coco.py)|[log](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-tiny-coco/raw/main/mask_rcnn_transnext_tiny_fpn_1x_coco_in1k.log.json)|
| TransNeXt-Small | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true) |1x|51.1|45.5|69.3M|[model](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-small-coco/resolve/main/mask_rcnn_transnext_small_fpn_1x_coco_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/maskrcnn/configs/mask_rcnn_transnext_small_fpn_1x_coco.py)|[log](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-small-coco/raw/main/mask_rcnn_transnext_small_fpn_1x_coco_in1k.log.json)|
| TransNeXt-Base | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true) |1x|51.7|45.9|109.2M|[model](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-base-coco/resolve/main/mask_rcnn_transnext_base_fpn_1x_coco_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/maskrcnn/configs/mask_rcnn_transnext_base_fpn_1x_coco.py)|[log](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-base-coco/raw/main/mask_rcnn_transnext_base_fpn_1x_coco_in1k.log.json)|
**COCO object detection results using the DINO method:**
| Backbone | Pretrained Model| scales | epochs | box mAP | #Params | Download |Config| Log |
|:---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:|:---:|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|4scale | 12|55.1|47.8M|[model](https://huggingface.co/DaiShiResearch/dino-4scale-transnext-tiny-coco/resolve/main/dino_4scale_transnext_tiny_12e_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/dino/configs/dino-4scale_transnext_tiny-12e_coco.py)|[log](https://huggingface.co/DaiShiResearch/dino-4scale-transnext-tiny-coco/raw/main/dino_4scale_transnext_tiny_12e_in1k.json)|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|5scale | 12|55.7|48.1M|[model](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-tiny-coco/resolve/main/dino_5scale_transnext_tiny_12e_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/dino/configs/dino-5scale_transnext_tiny-12e_coco.py)|[log](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-tiny-coco/raw/main/dino_5scale_transnext_tiny_12e_in1k.json)|
| TransNeXt-Small | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true)|5scale | 12|56.6|69.6M|[model](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-small-coco/resolve/main/dino_5scale_transnext_small_12e_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/dino/configs/dino-5scale_transnext_small-12e_coco.py)|[log](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-small-coco/raw/main/dino_5scale_transnext_small_12e_in1k.json)|
| TransNeXt-Base | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true)|5scale | 12|57.1|110M|[model](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-base-coco/resolve/main/dino_5scale_transnext_base_12e_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/dino/configs/dino-5scale_transnext_base-12e_coco.py)|[log](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-base-coco/raw/main/dino_5scale_transnext_base_12e_in1k.json)|
### Semantic Segmentation
***Semantic segmentation code & weights & configs & training logs are >>>[here](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/ )<<<.***
**ADE20K semantic segmentation results using the UPerNet method:**
| Backbone | Pretrained Model| Crop Size |Lr Schd| mIoU|mIoU (ms+flip)| #Params | Download |Config| Log |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|512x512|160K|51.1|51.5/51.7|59M|[model](https://huggingface.co/DaiShiResearch/upernet-transnext-tiny-ade/resolve/main/upernet_transnext_tiny_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/upernet/configs/upernet_transnext_tiny_512x512_160k_ade20k_ss.py)|[log](https://huggingface.co/DaiShiResearch/upernet-transnext-tiny-ade/blob/main/upernet_transnext_tiny_512x512_160k_ade20k_ss.log.json)|
| TransNeXt-Small | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true)|512x512|160K|52.2|52.5/51.8|80M|[model](https://huggingface.co/DaiShiResearch/upernet-transnext-small-ade/resolve/main/upernet_transnext_small_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/upernet/configs/upernet_transnext_small_512x512_160k_ade20k_ss.py)|[log](https://huggingface.co/DaiShiResearch/upernet-transnext-small-ade/blob/main/upernet_transnext_small_512x512_160k_ade20k_ss.log.json)|
| TransNeXt-Base | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true)|512x512|160K|53.0|53.5/53.7|121M|[model](https://huggingface.co/DaiShiResearch/upernet-transnext-base-ade/resolve/main/upernet_transnext_base_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/upernet/configs/upernet_transnext_base_512x512_160k_ade20k_ss.py)|[log](https://huggingface.co/DaiShiResearch/upernet-transnext-base-ade/blob/main/upernet_transnext_base_512x512_160k_ade20k_ss.log.json)|
* In the context of multi-scale evaluation, TransNeXt reports test results under two distinct scenarios: **interpolation** and **extrapolation** of relative position bias.
**ADE20K semantic segmentation results using the Mask2Former method:**
| Backbone | Pretrained Model| Crop Size |Lr Schd| mIoU| #Params | Download |Config| Log |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|512x512|160K|53.4|47.5M|[model](https://huggingface.co/DaiShiResearch/mask2former-transnext-tiny-ade/resolve/main/mask2former_transnext_tiny_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/mask2former/configs/mask2former_transnext_tiny_160k_ade20k-512x512.py)|[log](https://huggingface.co/DaiShiResearch/mask2former-transnext-tiny-ade/raw/main/mask2former_transnext_tiny_512x512_160k_ade20k_in1k.json)|
| TransNeXt-Small | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true)|512x512|160K|54.1|69.0M|[model](https://huggingface.co/DaiShiResearch/mask2former-transnext-small-ade/resolve/main/mask2former_transnext_small_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/mask2former/configs/mask2former_transnext_small_160k_ade20k-512x512.py)|[log](https://huggingface.co/DaiShiResearch/mask2former-transnext-small-ade/raw/main/mask2former_transnext_small_512x512_160k_ade20k_in1k.json)|
| TransNeXt-Base | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true)|512x512|160K|54.7|109M|[model](https://huggingface.co/DaiShiResearch/mask2former-transnext-base-ade/resolve/main/mask2former_transnext_base_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/mask2former/configs/mask2former_transnext_base_160k_ade20k-512x512.py)|[log](https://huggingface.co/DaiShiResearch/mask2former-transnext-base-ade/raw/main/mask2former_transnext_base_512x512_160k_ade20k_in1k.json)|
## Citation
If you find our work helpful, please consider citing the following BibTeX entry. We would greatly appreciate a star for this
project.
@misc{shi2023transnext,
author = {Dai Shi},
title = {TransNeXt: Robust Foveal Visual Perception for Vision Transformers},
year = {2023},
eprint = {arXiv:2311.17132},
archivePrefix={arXiv},
primaryClass={cs.CV}
} | {"language": ["en"], "license": "apache-2.0", "library_name": "pytorch", "tags": ["vision"], "datasets": ["imagenet-1k", "ade20k"], "metrics": ["mean_iou"], "pipeline_tag": "image-segmentation"} | DaiShiResearch/upernet-transnext-small-ade | null | [
"pytorch",
"vision",
"image-segmentation",
"en",
"dataset:imagenet-1k",
"dataset:ade20k",
"arxiv:2311.17132",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T02:12:09+00:00 | [
"2311.17132"
] | [
"en"
] | TAGS
#pytorch #vision #image-segmentation #en #dataset-imagenet-1k #dataset-ade20k #arxiv-2311.17132 #license-apache-2.0 #region-us
| TransNeXt
=========
Official Model release
for "TransNeXt: Robust Foveal Visual Perception for Vision Transformers" [CVPR 2024]
.
Model Details
-------------
* Code: URL
* Paper: TransNeXt: Robust Foveal Visual Perception for Vision Transformers
* Author: Dai Shi
* Email: daishiresearch@URL
Methods
-------
#### Pixel-focused attention (Left) & aggregated attention (Right):
!pixel-focused\_attention
#### Convolutional GLU (First on the right):
!Convolutional GLU
Results
-------
#### Image Classification, Detection and Segmentation:
!experiment\_figure
#### Attention Visualization:
!foveal\_peripheral\_vision
Model Zoo
---------
### Image Classification
*Classification code & weights & configs & training logs are >>>here<<<.*
ImageNet-1K 224x224 pre-trained models:
ImageNet-1K 384x384 fine-tuned models:
ImageNet-1K 256x256 pre-trained model fully utilizing aggregated attention at all stages:
*(See Table.9 in Appendix D.6 for details)*
### Object Detection
*Object detection code & weights & configs & training logs are >>>here<<<.*
COCO object detection and instance segmentation results using the Mask R-CNN method:
COCO object detection results using the DINO method:
### Semantic Segmentation
*Semantic segmentation code & weights & configs & training logs are >>>here<<<.*
ADE20K semantic segmentation results using the UPerNet method:
* In the context of multi-scale evaluation, TransNeXt reports test results under two distinct scenarios: interpolation and extrapolation of relative position bias.
ADE20K semantic segmentation results using the Mask2Former method:
If you find our work helpful, please consider citing the following bibtex. We would greatly appreciate a star for this
project.
```
@misc{shi2023transnext,
author = {Dai Shi},
title = {TransNeXt: Robust Foveal Visual Perception for Vision Transformers},
year = {2023},
eprint = {arXiv:2311.17132},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
| [
"#### Pixel-focused attention (Left) & aggregated attention (Right):\n\n\n!pixel-focused\\_attention",
"#### Convolutional GLU (First on the right):\n\n\n!Convolutional GLU\n\n\nResults\n-------",
"#### Image Classification, Detection and Segmentation:\n\n\n!experiment\\_figure",
"#### Attention Visualization:\n\n\n!foveal\\_peripheral\\_vision\n\n\nModel Zoo\n---------",
"### Image Classification\n\n\n*Classification code & weights & configs & training logs are >>>here<<<.*\n\n\nImageNet-1K 224x224 pre-trained models:\n\n\n\nImageNet-1K 384x384 fine-tuned models:\n\n\n\nImageNet-1K 256x256 pre-trained model fully utilizing aggregated attention at all stages:\n\n\n*(See Table.9 in Appendix D.6 for details)*",
"### Object Detection\n\n\n*Object detection code & weights & configs & training logs are >>>here<<<.*\n\n\nCOCO object detection and instance segmentation results using the Mask R-CNN method:\n\n\n\nCOCO object detection results using the DINO method:",
"### Semantic Segmentation\n\n\n*Semantic segmentation code & weights & configs & training logs are >>>here<<<.*\n\n\nADE20K semantic segmentation results using the UPerNet method:\n\n\n\n* In the context of multi-scale evaluation, TransNeXt reports test results under two distinct scenarios: interpolation and extrapolation of relative position bias.\n\n\nADE20K semantic segmentation results using the Mask2Former method:\n\n\n\nIf you find our work helpful, please consider citing the following bibtex. We would greatly appreciate a star for this\nproject.\n\n\n\n```\n@misc{shi2023transnext,\n author = {Dai Shi},\n title = {TransNeXt: Robust Foveal Visual Perception for Vision Transformers},\n year = {2023},\n eprint = {arXiv:2311.17132},\n archivePrefix={arXiv},\n primaryClass={cs.CV}\n}\n\n```"
] | [
"TAGS\n#pytorch #vision #image-segmentation #en #dataset-imagenet-1k #dataset-ade20k #arxiv-2311.17132 #license-apache-2.0 #region-us \n",
"#### Pixel-focused attention (Left) & aggregated attention (Right):\n\n\n!pixel-focused\\_attention",
"#### Convolutional GLU (First on the right):\n\n\n!Convolutional GLU\n\n\nResults\n-------",
"#### Image Classification, Detection and Segmentation:\n\n\n!experiment\\_figure",
"#### Attention Visualization:\n\n\n!foveal\\_peripheral\\_vision\n\n\nModel Zoo\n---------",
"### Image Classification\n\n\n*Classification code & weights & configs & training logs are >>>here<<<.*\n\n\nImageNet-1K 224x224 pre-trained models:\n\n\n\nImageNet-1K 384x384 fine-tuned models:\n\n\n\nImageNet-1K 256x256 pre-trained model fully utilizing aggregated attention at all stages:\n\n\n*(See Table.9 in Appendix D.6 for details)*",
"### Object Detection\n\n\n*Object detection code & weights & configs & training logs are >>>here<<<.*\n\n\nCOCO object detection and instance segmentation results using the Mask R-CNN method:\n\n\n\nCOCO object detection results using the DINO method:",
"### Semantic Segmentation\n\n\n*Semantic segmentation code & weights & configs & training logs are >>>here<<<.*\n\n\nADE20K semantic segmentation results using the UPerNet method:\n\n\n\n* In the context of multi-scale evaluation, TransNeXt reports test results under two distinct scenarios: interpolation and extrapolation of relative position bias.\n\n\nADE20K semantic segmentation results using the Mask2Former method:\n\n\n\nIf you find our work helpful, please consider citing the following bibtex. We would greatly appreciate a star for this\nproject.\n\n\n\n```\n@misc{shi2023transnext,\n author = {Dai Shi},\n title = {TransNeXt: Robust Foveal Visual Perception for Vision Transformers},\n year = {2023},\n eprint = {arXiv:2311.17132},\n archivePrefix={arXiv},\n primaryClass={cs.CV}\n}\n\n```"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zzttbrdd/sn6_01m](https://huggingface.co/zzttbrdd/sn6_01m)
* [zzttbrdd/sn6_07m](https://huggingface.co/zzttbrdd/sn6_07m)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: zzttbrdd/sn6_01m
layer_range: [0, 32]
- model: zzttbrdd/sn6_07m
layer_range: [0, 32]
merge_method: slerp
base_model: zzttbrdd/sn6_07m
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
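To reproduce a merge like this one, the configuration above can be saved to a file and passed to mergekit's command-line entry point (assuming mergekit is installed):

```bash
pip install mergekit
# Save the YAML above as config.yml, then run:
mergekit-yaml config.yml ./merged-model
# Optionally add --cuda to run the merge on a GPU.
```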
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["zzttbrdd/sn6_01m", "zzttbrdd/sn6_07m"]} | Sumail/Ame9 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zzttbrdd/sn6_01m",
"base_model:zzttbrdd/sn6_07m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T02:12:39+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-zzttbrdd/sn6_01m #base_model-zzttbrdd/sn6_07m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* zzttbrdd/sn6_01m
* zzttbrdd/sn6_07m
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* zzttbrdd/sn6_01m\n* zzttbrdd/sn6_07m",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-zzttbrdd/sn6_01m #base_model-zzttbrdd/sn6_07m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* zzttbrdd/sn6_01m\n* zzttbrdd/sn6_07m",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
image-segmentation | pytorch |
# TransNeXt
Official Model release
for ["TransNeXt: Robust Foveal Visual Perception for Vision Transformers"](https://arxiv.org/pdf/2311.17132.pdf) [CVPR 2024]
.
## Model Details
- **Code:** https://github.com/DaiShiResearch/TransNeXt
- **Paper:** [TransNeXt: Robust Foveal Visual Perception for Vision Transformers](https://arxiv.org/abs/2311.17132)
- **Author:** [Dai Shi](https://github.com/DaiShiResearch)
- **Email:** [email protected]
## Methods
#### Pixel-focused attention (Left) & aggregated attention (Right):

#### Convolutional GLU (First on the right):

## Results
#### Image Classification, Detection and Segmentation:

#### Attention Visualization:

## Model Zoo
### Image Classification
***Classification code & weights & configs & training logs are >>>[here](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/ )<<<.***
**ImageNet-1K 224x224 pre-trained models:**
| Model | #Params | #FLOPs |IN-1K | IN-A | IN-C↓ |IN-R|Sketch|IN-V2|Download |Config| Log |
|:---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:|
| TransNeXt-Micro|12.8M|2.7G| 82.5 | 29.9 | 50.8|45.8|33.0|72.6|[model](https://huggingface.co/DaiShiResearch/transnext-micro-224-1k/resolve/main/transnext_micro_224_1k.pth?download=true) |[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_micro.py)|[log](https://huggingface.co/DaiShiResearch/transnext-micro-224-1k/raw/main/transnext_micro_224_1k.txt) |
| TransNeXt-Tiny |28.2M|5.7G| 84.0| 39.9| 46.5|49.6|37.6|73.8|[model](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_tiny.py)|[log](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/raw/main/transnext_tiny_224_1k.txt)|
| TransNeXt-Small |49.7M|10.3G| 84.7| 47.1| 43.9|52.5| 39.7|74.8 |[model](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_small.py)|[log](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/raw/main/transnext_small_224_1k.txt)|
| TransNeXt-Base |89.7M|18.4G| 84.8| 50.6|43.5|53.9|41.4|75.1| [model](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_base.py)|[log](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/raw/main/transnext_base_224_1k.txt)|
**ImageNet-1K 384x384 fine-tuned models:**
| Model | #Params | #FLOPs |IN-1K | IN-A |IN-R|Sketch|IN-V2| Download |Config|
|:---:|:---:|:---:|:---:| :---:|:---:|:---:| :---:|:---:|:---:|
| TransNeXt-Small |49.7M|32.1G| 86.0| 58.3|56.4|43.2|76.8| [model](https://huggingface.co/DaiShiResearch/transnext-small-384-1k-ft-1k/resolve/main/transnext_small_384_1k_ft_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/finetune/transnext_small_384_ft.py)|
| TransNeXt-Base |89.7M|56.3G| 86.2| 61.6|57.7|44.7|77.0| [model](https://huggingface.co/DaiShiResearch/transnext-base-384-1k-ft-1k/resolve/main/transnext_base_384_1k_ft_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/finetune/transnext_base_384_ft.py)|
**ImageNet-1K 256x256 pre-trained model fully utilizing aggregated attention at all stages:**
*(See Table.9 in Appendix D.6 for details)*
| Model |Token mixer| #Params | #FLOPs |IN-1K |Download |Config| Log |
|:---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:|
|TransNeXt-Micro|**A-A-A-A**|13.1M|3.3G| 82.6 |[model](https://huggingface.co/DaiShiResearch/transnext-micro-AAAA-256-1k/resolve/main/transnext_micro_AAAA_256_1k.pth?download=true) |[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_micro_AAAA_256.py)|[log](https://huggingface.co/DaiShiResearch/transnext-micro-AAAA-256-1k/blob/main/transnext_micro_AAAA_256_1k.txt) |
### Object Detection
***Object detection code & weights & configs & training logs are >>>[here](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/ )<<<.***
**COCO object detection and instance segmentation results using the Mask R-CNN method:**
| Backbone | Pretrained Model| Lr Schd| box mAP | mask mAP | #Params | Download |Config| Log |
|:---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:|:---:|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true) |1x|49.9|44.6|47.9M|[model](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-tiny-coco/resolve/main/mask_rcnn_transnext_tiny_fpn_1x_coco_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/maskrcnn/configs/mask_rcnn_transnext_tiny_fpn_1x_coco.py)|[log](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-tiny-coco/raw/main/mask_rcnn_transnext_tiny_fpn_1x_coco_in1k.log.json)|
| TransNeXt-Small | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true) |1x|51.1|45.5|69.3M|[model](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-small-coco/resolve/main/mask_rcnn_transnext_small_fpn_1x_coco_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/maskrcnn/configs/mask_rcnn_transnext_small_fpn_1x_coco.py)|[log](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-small-coco/raw/main/mask_rcnn_transnext_small_fpn_1x_coco_in1k.log.json)|
| TransNeXt-Base | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true) |1x|51.7|45.9|109.2M|[model](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-base-coco/resolve/main/mask_rcnn_transnext_base_fpn_1x_coco_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/maskrcnn/configs/mask_rcnn_transnext_base_fpn_1x_coco.py)|[log](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-base-coco/raw/main/mask_rcnn_transnext_base_fpn_1x_coco_in1k.log.json)|
**COCO object detection results using the DINO method:**
| Backbone | Pretrained Model| scales | epochs | box mAP | #Params | Download |Config| Log |
|:---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:|:---:|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|4scale | 12|55.1|47.8M|[model](https://huggingface.co/DaiShiResearch/dino-4scale-transnext-tiny-coco/resolve/main/dino_4scale_transnext_tiny_12e_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/dino/configs/dino-4scale_transnext_tiny-12e_coco.py)|[log](https://huggingface.co/DaiShiResearch/dino-4scale-transnext-tiny-coco/raw/main/dino_4scale_transnext_tiny_12e_in1k.json)|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|5scale | 12|55.7|48.1M|[model](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-tiny-coco/resolve/main/dino_5scale_transnext_tiny_12e_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/dino/configs/dino-5scale_transnext_tiny-12e_coco.py)|[log](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-tiny-coco/raw/main/dino_5scale_transnext_tiny_12e_in1k.json)|
| TransNeXt-Small | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true)|5scale | 12|56.6|69.6M|[model](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-small-coco/resolve/main/dino_5scale_transnext_small_12e_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/dino/configs/dino-5scale_transnext_small-12e_coco.py)|[log](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-small-coco/raw/main/dino_5scale_transnext_small_12e_in1k.json)|
| TransNeXt-Base | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true)|5scale | 12|57.1|110M|[model](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-base-coco/resolve/main/dino_5scale_transnext_base_12e_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/dino/configs/dino-5scale_transnext_base-12e_coco.py)|[log](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-base-coco/raw/main/dino_5scale_transnext_base_12e_in1k.json)|
### Semantic Segmentation
***Semantic segmentation code & weights & configs & training logs are >>>[here](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/ )<<<.***
**ADE20K semantic segmentation results using the UPerNet method:**
| Backbone | Pretrained Model| Crop Size |Lr Schd| mIoU|mIoU (ms+flip)| #Params | Download |Config| Log |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|512x512|160K|51.1|51.5/51.7|59M|[model](https://huggingface.co/DaiShiResearch/upernet-transnext-tiny-ade/resolve/main/upernet_transnext_tiny_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/upernet/configs/upernet_transnext_tiny_512x512_160k_ade20k_ss.py)|[log](https://huggingface.co/DaiShiResearch/upernet-transnext-tiny-ade/blob/main/upernet_transnext_tiny_512x512_160k_ade20k_ss.log.json)|
| TransNeXt-Small | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true)|512x512|160K|52.2|52.5/51.8|80M|[model](https://huggingface.co/DaiShiResearch/upernet-transnext-small-ade/resolve/main/upernet_transnext_small_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/upernet/configs/upernet_transnext_small_512x512_160k_ade20k_ss.py)|[log](https://huggingface.co/DaiShiResearch/upernet-transnext-small-ade/blob/main/upernet_transnext_small_512x512_160k_ade20k_ss.log.json)|
| TransNeXt-Base | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true)|512x512|160K|53.0|53.5/53.7|121M|[model](https://huggingface.co/DaiShiResearch/upernet-transnext-base-ade/resolve/main/upernet_transnext_base_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/upernet/configs/upernet_transnext_base_512x512_160k_ade20k_ss.py)|[log](https://huggingface.co/DaiShiResearch/upernet-transnext-base-ade/blob/main/upernet_transnext_base_512x512_160k_ade20k_ss.log.json)|
* In the context of multi-scale evaluation, TransNeXt reports test results under two distinct scenarios: **interpolation** and **extrapolation** of relative position bias.
**ADE20K semantic segmentation results using the Mask2Former method:**
| Backbone | Pretrained Model| Crop Size |Lr Schd| mIoU| #Params | Download |Config| Log |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|512x512|160K|53.4|47.5M|[model](https://huggingface.co/DaiShiResearch/mask2former-transnext-tiny-ade/resolve/main/mask2former_transnext_tiny_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/mask2former/configs/mask2former_transnext_tiny_160k_ade20k-512x512.py)|[log](https://huggingface.co/DaiShiResearch/mask2former-transnext-tiny-ade/raw/main/mask2former_transnext_tiny_512x512_160k_ade20k_in1k.json)|
| TransNeXt-Small | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true)|512x512|160K|54.1|69.0M|[model](https://huggingface.co/DaiShiResearch/mask2former-transnext-small-ade/resolve/main/mask2former_transnext_small_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/mask2former/configs/mask2former_transnext_small_160k_ade20k-512x512.py)|[log](https://huggingface.co/DaiShiResearch/mask2former-transnext-small-ade/raw/main/mask2former_transnext_small_512x512_160k_ade20k_in1k.json)|
| TransNeXt-Base | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true)|512x512|160K|54.7|109M|[model](https://huggingface.co/DaiShiResearch/mask2former-transnext-base-ade/resolve/main/mask2former_transnext_base_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/mask2former/configs/mask2former_transnext_base_160k_ade20k-512x512.py)|[log](https://huggingface.co/DaiShiResearch/mask2former-transnext-base-ade/raw/main/mask2former_transnext_base_512x512_160k_ade20k_in1k.json)|
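For quick reference, single-image inference with one of the released UPerNet checkpoints can be sketched as below. This is an illustrative sketch only: it assumes a working mmsegmentation install with the TransNeXt backbone importable from a checkout of the repository linked above, and the API names shown are the mmsegmentation 1.x ones (`init_model`/`inference_model`; older 0.x releases use `init_segmentor`/`inference_segmentor`). The config path and checkpoint filename come from the tables above.
```python
# Illustrative sketch only; assumes a checkout of the TransNeXt repo so that the
# backbone and config are importable, and uses the mmsegmentation 1.x API names.
from huggingface_hub import hf_hub_download
from mmseg.apis import inference_model, init_model

checkpoint = hf_hub_download(
    repo_id="DaiShiResearch/upernet-transnext-base-ade",
    filename="upernet_transnext_base_512x512_160k_ade20k_in1k.pth",
)
config = "segmentation/upernet/configs/upernet_transnext_base_512x512_160k_ade20k_ss.py"  # path inside the repo checkout

model = init_model(config, checkpoint, device="cuda:0")
result = inference_model(model, "demo.jpg")  # per-pixel ADE20K predictions for your image
```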
## Citation
If you find our work helpful, please consider citing the following bibtex. We would greatly appreciate a star for this
project.
@misc{shi2023transnext,
author = {Dai Shi},
title = {TransNeXt: Robust Foveal Visual Perception for Vision Transformers},
year = {2023},
eprint = {arXiv:2311.17132},
archivePrefix={arXiv},
primaryClass={cs.CV}
} | {"language": ["en"], "license": "apache-2.0", "library_name": "pytorch", "tags": ["vision"], "datasets": ["imagenet-1k", "ade20k"], "metrics": ["mean_iou"], "pipeline_tag": "image-segmentation"} | DaiShiResearch/upernet-transnext-base-ade | null | [
"pytorch",
"vision",
"image-segmentation",
"en",
"dataset:imagenet-1k",
"dataset:ade20k",
"arxiv:2311.17132",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T02:13:04+00:00 | [
"2311.17132"
] | [
"en"
] | TAGS
#pytorch #vision #image-segmentation #en #dataset-imagenet-1k #dataset-ade20k #arxiv-2311.17132 #license-apache-2.0 #region-us
| TransNeXt
=========
Official Model release
for "TransNeXt: Robust Foveal Visual Perception for Vision Transformers" [CVPR 2024]
.
Model Details
-------------
* Code: URL
* Paper: TransNeXt: Robust Foveal Visual Perception for Vision Transformers
* Author: Dai Shi
* Email: daishiresearch@URL
Methods
-------
#### Pixel-focused attention (Left) & aggregated attention (Right):
!pixel-focused\_attention
#### Convolutional GLU (First on the right):
!Convolutional GLU
Results
-------
#### Image Classification, Detection and Segmentation:
!experiment\_figure
#### Attention Visualization:
!foveal\_peripheral\_vision
Model Zoo
---------
### Image Classification
*Classification code & weights & configs & training logs are >>>here<<<.*
ImageNet-1K 224x224 pre-trained models:
ImageNet-1K 384x384 fine-tuned models:
ImageNet-1K 256x256 pre-trained model fully utilizing aggregated attention at all stages:
*(See Table.9 in Appendix D.6 for details)*
### Object Detection
*Object detection code & weights & configs & training logs are >>>here<<<.*
COCO object detection and instance segmentation results using the Mask R-CNN method:
COCO object detection results using the DINO method:
### Semantic Segmentation
*Semantic segmentation code & weights & configs & training logs are >>>here<<<.*
ADE20K semantic segmentation results using the UPerNet method:
* In the context of multi-scale evaluation, TransNeXt reports test results under two distinct scenarios: interpolation and extrapolation of relative position bias.
ADE20K semantic segmentation results using the Mask2Former method:
If you find our work helpful, please consider citing the following bibtex. We would greatly appreciate a star for this
project.
```
@misc{shi2023transnext,
author = {Dai Shi},
title = {TransNeXt: Robust Foveal Visual Perception for Vision Transformers},
year = {2023},
eprint = {arXiv:2311.17132},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
| [
"#### Pixel-focused attention (Left) & aggregated attention (Right):\n\n\n!pixel-focused\\_attention",
"#### Convolutional GLU (First on the right):\n\n\n!Convolutional GLU\n\n\nResults\n-------",
"#### Image Classification, Detection and Segmentation:\n\n\n!experiment\\_figure",
"#### Attention Visualization:\n\n\n!foveal\\_peripheral\\_vision\n\n\nModel Zoo\n---------",
"### Image Classification\n\n\n*Classification code & weights & configs & training logs are >>>here<<<.*\n\n\nImageNet-1K 224x224 pre-trained models:\n\n\n\nImageNet-1K 384x384 fine-tuned models:\n\n\n\nImageNet-1K 256x256 pre-trained model fully utilizing aggregated attention at all stages:\n\n\n*(See Table.9 in Appendix D.6 for details)*",
"### Object Detection\n\n\n*Object detection code & weights & configs & training logs are >>>here<<<.*\n\n\nCOCO object detection and instance segmentation results using the Mask R-CNN method:\n\n\n\nCOCO object detection results using the DINO method:",
"### Semantic Segmentation\n\n\n*Semantic segmentation code & weights & configs & training logs are >>>here<<<.*\n\n\nADE20K semantic segmentation results using the UPerNet method:\n\n\n\n* In the context of multi-scale evaluation, TransNeXt reports test results under two distinct scenarios: interpolation and extrapolation of relative position bias.\n\n\nADE20K semantic segmentation results using the Mask2Former method:\n\n\n\nIf you find our work helpful, please consider citing the following bibtex. We would greatly appreciate a star for this\nproject.\n\n\n\n```\n@misc{shi2023transnext,\n author = {Dai Shi},\n title = {TransNeXt: Robust Foveal Visual Perception for Vision Transformers},\n year = {2023},\n eprint = {arXiv:2311.17132},\n archivePrefix={arXiv},\n primaryClass={cs.CV}\n}\n\n```"
] | [
"TAGS\n#pytorch #vision #image-segmentation #en #dataset-imagenet-1k #dataset-ade20k #arxiv-2311.17132 #license-apache-2.0 #region-us \n",
"#### Pixel-focused attention (Left) & aggregated attention (Right):\n\n\n!pixel-focused\\_attention",
"#### Convolutional GLU (First on the right):\n\n\n!Convolutional GLU\n\n\nResults\n-------",
"#### Image Classification, Detection and Segmentation:\n\n\n!experiment\\_figure",
"#### Attention Visualization:\n\n\n!foveal\\_peripheral\\_vision\n\n\nModel Zoo\n---------",
"### Image Classification\n\n\n*Classification code & weights & configs & training logs are >>>here<<<.*\n\n\nImageNet-1K 224x224 pre-trained models:\n\n\n\nImageNet-1K 384x384 fine-tuned models:\n\n\n\nImageNet-1K 256x256 pre-trained model fully utilizing aggregated attention at all stages:\n\n\n*(See Table.9 in Appendix D.6 for details)*",
"### Object Detection\n\n\n*Object detection code & weights & configs & training logs are >>>here<<<.*\n\n\nCOCO object detection and instance segmentation results using the Mask R-CNN method:\n\n\n\nCOCO object detection results using the DINO method:",
"### Semantic Segmentation\n\n\n*Semantic segmentation code & weights & configs & training logs are >>>here<<<.*\n\n\nADE20K semantic segmentation results using the UPerNet method:\n\n\n\n* In the context of multi-scale evaluation, TransNeXt reports test results under two distinct scenarios: interpolation and extrapolation of relative position bias.\n\n\nADE20K semantic segmentation results using the Mask2Former method:\n\n\n\nIf you find our work helpful, please consider citing the following bibtex. We would greatly appreciate a star for this\nproject.\n\n\n\n```\n@misc{shi2023transnext,\n author = {Dai Shi},\n title = {TransNeXt: Robust Foveal Visual Perception for Vision Transformers},\n year = {2023},\n eprint = {arXiv:2311.17132},\n archivePrefix={arXiv},\n primaryClass={cs.CV}\n}\n\n```"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
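Since this repository contains a PEFT adapter for the TinyLlama chat model, inference can be sketched roughly as follows. This is a hedged example rather than an official snippet: it assumes the adapter attaches cleanly to the listed base model and that the base tokenizer's chat template applies; the prompt is purely illustrative.
```python
# Hedged sketch: load the base chat model, then attach this repo's PEFT adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "bgsmagnuson/tiny-llama-stack-overflow"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "How do I reverse a list in Python?"}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```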
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
- mixed_precision_training: Native AMP
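The hyperparameters above map roughly onto a TRL `SFTTrainer` run like the sketch below. This is a reconstruction for illustration only: the actual dataset, prompt formatting, and LoRA settings are not documented here, so the placeholder names (`train.jsonl`, the `text` column, the bare `LoraConfig`) are assumptions, and argument names vary somewhat across TRL versions.
```python
# Hypothetical reconstruction; dataset path, text column, and LoRA settings are placeholders.
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

train_dataset = load_dataset("json", data_files="train.jsonl", split="train")

args = TrainingArguments(
    output_dir="results",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,  # effective batch size of 4
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=1,
    fp16=True,                      # "Native AMP" mixed precision
)

trainer = SFTTrainer(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",                      # assumed column name
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # real LoRA hyperparameters unknown
)
trainer.train()
```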
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0", "model-index": [{"name": "results", "results": []}]} | bgsmagnuson/tiny-llama-stack-overflow | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T02:15:47+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us
|
# results
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# results\n\nThis model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us \n",
"# results\n\nThis model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | alexplash/Blocky_Matching | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2024-04-18T02:18:36+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #has_space #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #has_space #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # older course notebooks may use `import gym` instead

# `load_from_hub` is not a library function; it is the helper defined in the
# Hugging Face Deep RL course notebook (hf_hub_download + pickle).
model = load_from_hub(repo_id="WharfRat/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
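A greedy evaluation episode might then look like the sketch below. It assumes the pickled dictionary stores the learned Q-table under a `qtable` key (as in the course notebooks) and that the environment follows the gymnasium `step` API; verify both against the actual file.
```python
import numpy as np

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```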
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | WharfRat/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-18T02:20:14+00:00 | [] | [] | TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1.
## Usage
model = load_from_hub(repo_id="WharfRat/q-FrozenLake-v1-4x4-noSlippery", filename="URL")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = URL(model["env_id"])
| [
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage\n\n model = load_from_hub(repo_id=\"WharfRat/q-FrozenLake-v1-4x4-noSlippery\", filename=\"URL\")\n\n # Don't forget to check if you need to add additional attributes (is_slippery=False etc)\n env = URL(model[\"env_id\"])"
] | [
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage\n\n model = load_from_hub(repo_id=\"WharfRat/q-FrozenLake-v1-4x4-noSlippery\", filename=\"URL\")\n\n # Don't forget to check if you need to add additional attributes (is_slippery=False etc)\n env = URL(model[\"env_id\"])"
] |
text-generation | transformers | # R136a1/InfinityKumon-2x7B AWQ
- Model creator: [R136a1](https://huggingface.co/R136a1)
- Original model: [InfinityKumon-2x7B](https://huggingface.co/R136a1/InfinityKumon-2x7B)

## Model Summary
Another MoE merge from [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B) and [grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B).
The reason? Because I like InfinityRP-v1-7B so much and wondered if I could improve it even more by merging two great models into a MoE.
### Prompt format:
Alpaca or ChatML
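A quick way to try the quant is sketched below. It is illustrative only: it assumes `autoawq` is installed alongside a recent `transformers` release (which can load AWQ checkpoints directly), and the prompt string simply demonstrates the ChatML variant of the formats listed above.
```python
# Hedged sketch: requires autoawq plus a transformers version with AWQ support.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/InfinityKumon-2x7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# ChatML-style prompt (Alpaca also works, per the note above)
prompt = "<|im_start|>user\nWrite a short scene set in a rainy city.<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```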
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "safetensors", "mixtral", "not-for-all-audiences", "nsfw"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious", "model-index": [{"name": "InfinityKumon-2x7B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 69.62, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKumon-2x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 87.09, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKumon-2x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 64.97, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKumon-2x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 61.99}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKumon-2x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 81.93, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKumon-2x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 63.53, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKumon-2x7B", "name": "Open LLM Leaderboard"}}]}]} | solidrust/InfinityKumon-2x7B-AWQ | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"not-for-all-audiences",
"nsfw",
"en",
"license:apache-2.0",
"model-index",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T02:20:55+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mixtral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #not-for-all-audiences #nsfw #en #license-apache-2.0 #model-index #text-generation-inference #region-us
| # R136a1/InfinityKumon-2x7B AWQ
- Model creator: R136a1
- Original model: InfinityKumon-2x7B
!InfinityKumon-2x7B
## Model Summary
Another MoE merge from Endevor/InfinityRP-v1-7B and grimjim/kukulemon-7B.
The reason? Because I like InfinityRP-v1-7B so much and wondering if I can improve it even more by merging 2 great models into MoE.
### Prompt format:
Alpaca or ChatML
| [
"# R136a1/InfinityKumon-2x7B AWQ\n\n- Model creator: R136a1\n- Original model: InfinityKumon-2x7B\n\n!InfinityKumon-2x7B",
"## Model Summary\n\nAnother MoE merge from Endevor/InfinityRP-v1-7B and grimjim/kukulemon-7B.\n\nThe reason? Because I like InfinityRP-v1-7B so much and wondering if I can improve it even more by merging 2 great models into MoE.",
"### Prompt format: \nAlpaca or ChatML"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #not-for-all-audiences #nsfw #en #license-apache-2.0 #model-index #text-generation-inference #region-us \n",
"# R136a1/InfinityKumon-2x7B AWQ\n\n- Model creator: R136a1\n- Original model: InfinityKumon-2x7B\n\n!InfinityKumon-2x7B",
"## Model Summary\n\nAnother MoE merge from Endevor/InfinityRP-v1-7B and grimjim/kukulemon-7B.\n\nThe reason? Because I like InfinityRP-v1-7B so much and wondering if I can improve it even more by merging 2 great models into MoE.",
"### Prompt format: \nAlpaca or ChatML"
] |
text-generation | transformers | # amazingvince/openhermes-7b-dpo AWQ
- Model creator: [amazingvince](https://huggingface.co/amazingvince)
- Original model: [openhermes-7b-dpo](https://huggingface.co/amazingvince/openhermes-7b-dpo)
## Model Summary
OpenHermes 2.5 Mistral 7B is a state-of-the-art Mistral fine-tune and a continuation of the OpenHermes 2 model, which was trained on additional code datasets.
Potentially the most interesting finding from training on a good ratio (estimated at around 7-14% of the total dataset) of code instruction data was that it boosted several non-code benchmarks, including TruthfulQA, AGIEval, and the GPT4All suite. It did, however, reduce the BigBench benchmark score, but the net gain overall is significant.
Here, we are fine-tuning OpenHermes using DPO with various data meant to improve its abilities.
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/openhermes-7b-dpo-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"conversational",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T02:21:16+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #license-apache-2.0 #text-generation-inference #region-us
| # amazingvince/openhermes-7b-dpo AWQ
- Model creator: amazingvince
- Original model: openhermes-7b-dpo
## Model Summary
OpenHermes 2.5 Mistral 7B is a state of the art Mistral Fine-tune, a continuation of OpenHermes 2 model, which trained on additional code datasets.
Potentially the most interesting finding from training on a good ratio (est. of around 7-14% of the total dataset) of code instruction was that it has boosted several non-code benchmarks, including TruthfulQA, AGIEval, and GPT4All suite. It did however reduce BigBench benchmark score, but the net gain overall is significant.
Here, we are finetuning openheremes using DPO with various data meant to improve its abilities.
| [
"# amazingvince/openhermes-7b-dpo AWQ\n\n- Model creator: amazingvince\n- Original model: openhermes-7b-dpo",
"## Model Summary\n\nOpenHermes 2.5 Mistral 7B is a state of the art Mistral Fine-tune, a continuation of OpenHermes 2 model, which trained on additional code datasets.\n\nPotentially the most interesting finding from training on a good ratio (est. of around 7-14% of the total dataset) of code instruction was that it has boosted several non-code benchmarks, including TruthfulQA, AGIEval, and GPT4All suite. It did however reduce BigBench benchmark score, but the net gain overall is significant.\n\nHere, we are finetuning openheremes using DPO with various data meant to improve its abilities."
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #license-apache-2.0 #text-generation-inference #region-us \n",
"# amazingvince/openhermes-7b-dpo AWQ\n\n- Model creator: amazingvince\n- Original model: openhermes-7b-dpo",
"## Model Summary\n\nOpenHermes 2.5 Mistral 7B is a state of the art Mistral Fine-tune, a continuation of OpenHermes 2 model, which trained on additional code datasets.\n\nPotentially the most interesting finding from training on a good ratio (est. of around 7-14% of the total dataset) of code instruction was that it has boosted several non-code benchmarks, including TruthfulQA, AGIEval, and GPT4All suite. It did however reduce BigBench benchmark score, but the net gain overall is significant.\n\nHere, we are finetuning openheremes using DPO with various data meant to improve its abilities."
] |
text-generation | transformers | # amazingvince/Yoda-WizardLM-2.3-7B AWQ
- Model creator: [amazingvince](https://huggingface.co/amazingvince)
- Original model: [Yoda-WizardLM-2.3-7B](https://huggingface.co/amazingvince/Yoda-WizardLM-2.3-7B)
## Model Summary
This model is a fine-tuned version of [amazingvince/Not-WizardLM-2-7B](https://huggingface.co/amazingvince/Not-WizardLM-2-7B) on an unknown dataset.
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3 | {"license": "apache-2.0", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "trl", "orpo", "generated_from_trainer"], "base_model": "amazingvince/Not-WizardLM-2-7B", "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious", "model-index": [{"name": "Yoda-WizardLM-2.3-7B", "results": []}]} | solidrust/Yoda-WizardLM-2.3-7B-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"trl",
"orpo",
"generated_from_trainer",
"base_model:amazingvince/Not-WizardLM-2-7B",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T02:22:08+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #trl #orpo #generated_from_trainer #base_model-amazingvince/Not-WizardLM-2-7B #license-apache-2.0 #text-generation-inference #region-us
| # amazingvince/Yoda-WizardLM-2.3-7B AWQ
- Model creator: amazingvince
- Original model: Yoda-WizardLM-2.3-7B
## Model Summary
This model is a fine-tuned version of amazingvince/Not-WizardLM-2-7B on an unknown dataset.
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3 | [
"# amazingvince/Yoda-WizardLM-2.3-7B AWQ\n\n- Model creator: amazingvince\n- Original model: Yoda-WizardLM-2.3-7B",
"## Model Summary\n\nThis model is a fine-tuned version of amazingvince/Not-WizardLM-2-7B on an unknown dataset.\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 3"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #trl #orpo #generated_from_trainer #base_model-amazingvince/Not-WizardLM-2-7B #license-apache-2.0 #text-generation-inference #region-us \n",
"# amazingvince/Yoda-WizardLM-2.3-7B AWQ\n\n- Model creator: amazingvince\n- Original model: Yoda-WizardLM-2.3-7B",
"## Model Summary\n\nThis model is a fine-tuned version of amazingvince/Not-WizardLM-2-7B on an unknown dataset.\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 3"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - DaichiT/path-to-save-model2
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
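A minimal, unofficial sketch of running this checkpoint with the standard `diffusers` text-to-image API is shown below; the prompt reuses the `sks dog` instance token the weights were trained on, and the generation settings are ordinary defaults.
```python
# Hedged sketch, not the authors' official snippet.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "DaichiT/path-to-save-model2", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```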
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "base_model": "CompVis/stable-diffusion-v1-4", "inference": true, "instance_prompt": "a photo of sks dog"} | DaichiT/path-to-save-model2 | null | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-18T02:23:32+00:00 | [] | [] | TAGS
#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-CompVis/stable-diffusion-v1-4 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# DreamBooth - DaichiT/path-to-save-model2
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using DreamBooth.
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# DreamBooth - DaichiT/path-to-save-model2\n\nThis is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-CompVis/stable-diffusion-v1-4 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# DreamBooth - DaichiT/path-to-save-model2\n\nThis is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-generation | transformers |
# BRisa 7B Instruct
This is an instruction model trained for good performance in Portuguese. The starting point is the Mistral 7B v0.2 model ([source](https://huggingface.co/mistral-community/Mistral-7B-v0.2)). We used the JJhooww/Mistral-7B-v0.2-Base_ptbr version, pre-trained on 1 billion tokens of Portuguese ([source](https://huggingface.co/JJhooww/Mistral-7B-v0.2-Base_ptbr)).
The base model performs well in Portuguese but struggles to follow instructions. We therefore took mistralai/Mistral-7B-Instruct-v0.2, fine-tuned it for responses in Portuguese, and then merged it with the base JJhooww/Mistral-7B-v0.2-Base_ptbr (https://huggingface.co/JJhooww/Mistral-7B-v0.2-Base_ptbr).
- **Developed by:** ([J-LAB](https://huggingface.co/J-LAB/))
- **Language(s) (NLP):** Portuguese
- **License:** *APACHE*
- **Finetuned from model:** ([source](https://huggingface.co/JJhooww/Mistral-7B-v0.2-Base_ptbr))
### Model Sources
- **Demo:** ([Demo of the DPO version](https://huggingface.co/spaces/J-LAB/BRisa-7B))
# Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/J-LAB/BRisa-7B-Instruct-v0.2) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
| Metric | Value |
|--------------------------|---------|
|Average |**66.19**|
|ENEM Challenge (No Images)| 65.08|
|BLUEX (No Images) | 53.69|
|OAB Exams | 43.37|
|Assin2 RTE | 91.50|
|Assin2 STS | 73.61|
|FaQuAD NLI | 68.31|
|HateBR Binary | 74.28|
|PT Hate Speech Binary | 65.12|
|tweetSentBR | 60.77|
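For completeness, a standard `transformers` chat-style call is sketched below. It is illustrative only: it assumes the repository ships a chat template (the model follows the Mistral Instruct lineage), and the Portuguese prompt is just a placeholder.
```python
# Hedged sketch: plain transformers usage with the tokenizer's chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "J-LAB/BRisa-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explique em poucas palavras o que é aprendizado de máquina."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```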
| {"license": "apache-2.0", "tags": ["JJhooww/Mistral-7B-v0.2-Base_ptbr", "J-LAB/BRisa"], "model-index": [{"name": "BRisa-7B-Instruct-v0.2", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "ENEM Challenge (No Images)", "type": "eduagarcia/enem_challenge", "split": "train", "args": {"num_few_shot": 3}}, "metrics": [{"type": "acc", "value": 65.08, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2", "name": "Open Portuguese LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "BLUEX (No Images)", "type": "eduagarcia-temp/BLUEX_without_images", "split": "train", "args": {"num_few_shot": 3}}, "metrics": [{"type": "acc", "value": 53.69, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2", "name": "Open Portuguese LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "OAB Exams", "type": "eduagarcia/oab_exams", "split": "train", "args": {"num_few_shot": 3}}, "metrics": [{"type": "acc", "value": 43.37, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2", "name": "Open Portuguese LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Assin2 RTE", "type": "assin2", "split": "test", "args": {"num_few_shot": 15}}, "metrics": [{"type": "f1_macro", "value": 91.5, "name": "f1-macro"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2", "name": "Open Portuguese LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Assin2 STS", "type": "eduagarcia/portuguese_benchmark", "split": "test", "args": {"num_few_shot": 15}}, "metrics": [{"type": "pearson", "value": 73.61, "name": "pearson"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2", "name": "Open Portuguese LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "FaQuAD NLI", "type": "ruanchaves/faquad-nli", "split": "test", "args": {"num_few_shot": 15}}, "metrics": [{"type": "f1_macro", "value": 68.31, "name": "f1-macro"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2", "name": "Open Portuguese LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HateBR Binary", "type": "ruanchaves/hatebr", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "f1_macro", "value": 74.28, "name": "f1-macro"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2", "name": "Open Portuguese LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "PT Hate Speech Binary", "type": "hate_speech_portuguese", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "f1_macro", "value": 65.12, "name": "f1-macro"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2", "name": "Open Portuguese LLM Leaderboard"}}, {"task": {"type": "text-generation", 
"name": "Text Generation"}, "dataset": {"name": "tweetSentBR", "type": "eduagarcia/tweetsentbr_fewshot", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "f1_macro", "value": 60.77, "name": "f1-macro"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2", "name": "Open Portuguese LLM Leaderboard"}}]}]} | J-LAB/BRisa-7B-Instruct-v0.2 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"JJhooww/Mistral-7B-v0.2-Base_ptbr",
"J-LAB/BRisa",
"conversational",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T02:24:17+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #JJhooww/Mistral-7B-v0.2-Base_ptbr #J-LAB/BRisa #conversational #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| BRisa 7B Instruct
=================
This is an instruction model trained for good performance in Portuguese. The initial base is the Mistral 7B v2 Model (source). We utilized the JJhooww/Mistral-7B-v0.2-Base\_ptbr version pre-trained on 1 billion tokens in Portuguese (source).
The base model has good performance in Portuguese but faces significant challenges following instructions. We therefore used the version mistralai/Mistral-7B-Instruct-v0.2 and fine-tuned it for responses in Portuguese, then merged it with the base JJhooww/Mistral-7B-v0.2-Base\_ptbr (URL
* Developed by: (J-LAB)
* Language(s) (NLP): Portuguese
* License: *APACHE*
* Finetuned from model: (source)
### Model Sources
* Demo: (Demonstracao da Versão DPO)
Open Portuguese LLM Leaderboard Evaluation Results
==================================================
Detailed results can be found here and on the Open Portuguese LLM Leaderboard
| [
"### Model Sources\n\n\n* Demo: (Demonstracao da Versão DPO)\n\n\nOpen Portuguese LLM Leaderboard Evaluation Results\n==================================================\n\n\nDetailed results can be found here and on the Open Portuguese LLM Leaderboard"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #JJhooww/Mistral-7B-v0.2-Base_ptbr #J-LAB/BRisa #conversational #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"### Model Sources\n\n\n* Demo: (Demonstracao da Versão DPO)\n\n\nOpen Portuguese LLM Leaderboard Evaluation Results\n==================================================\n\n\nDetailed results can be found here and on the Open Portuguese LLM Leaderboard"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1368
- F1: 0.8622
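As a quick smoke test, the fine-tuned checkpoint can be tried through the standard token-classification pipeline; the German sentence is only an illustrative input.
```python
# Quick sketch using the transformers pipeline API.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="OscarNav/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```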
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2567 | 1.0 | 525 | 0.1632 | 0.8170 |
| 0.1247 | 2.0 | 1050 | 0.1403 | 0.8500 |
| 0.0808 | 3.0 | 1575 | 0.1368 | 0.8622 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de", "results": []}]} | OscarNav/xlm-roberta-base-finetuned-panx-de | null | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T02:34:08+00:00 | [] | [] | TAGS
#transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
| xlm-roberta-base-finetuned-panx-de
==================================
This model is a fine-tuned version of xlm-roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1368
* F1: 0.8622
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 24
* eval\_batch\_size: 24
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.32.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.13.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.32.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.32.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3"
] |
text-generation | transformers | # Fireblossom-32K-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
For this merge, I went back to Mistral 7B v0.1 as the literal base model for the task arithmetic merge; it can be pushed to at least 16K context length after adjusting rope theta from 10K to 100K. With the original (true) base model, the models merged in should be mathematically equivalent to LoRA adapters. I left the original 32K context claimed by Mistral 7B v0.1.
The goal was a merge model more varied in its outputs, a goal which inherently harms accuracy in favor of creativity. To this end, I chose a model trained to be strong at narrative roleplay (cgato's work) along with three models that were good at reasoning (fine-tunes by HuggingFaceH4 and SanjiWatsuki). The result appears to be good at following card instructions, perhaps to a fault.
Sampler settings: Tested lightly with temperature=0.7 and minP=0.01. For greater creativity, boost temperature.
Download options:
* [full weights](https://huggingface.co/grimjim/fireblossom-32K-7B)
* [Q8_0 GGUF](https://huggingface.co/grimjim/fireblossom-32K-7B-GGUF)
* [8.0bpw h8 exl2](https://huggingface.co/grimjim/fireblossom-32K-7B-8.0bpw_h8_exl2)
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
* [cgato/TheSpice-7b-v0.1.1](https://huggingface.co/cgato/TheSpice-7b-v0.1.1)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
weight: 0.45
- model: cgato/TheSpice-7b-v0.1.1
parameters:
weight: 0.05
- model: HuggingFaceH4/zephyr-7b-beta
parameters:
weight: 0.05
- model: SanjiWatsuki/Kunoichi-7B
parameters:
weight: 0.45
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
dtype: float16
```
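To reproduce a merge like this one, mergekit exposes a command-line entry point; the config and output paths below are placeholders, not taken from this card:
```bash
# Sketch: save the YAML above to a file, then run the merge (--cuda is optional).
pip install mergekit
mergekit-yaml fireblossom.yaml ./fireblossom-32K-7B --cuda
```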
| {"license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["HuggingFaceH4/zephyr-7b-beta", "cgato/TheSpice-7b-v0.1.1", "SanjiWatsuki/Kunoichi-DPO-v2-7B", "SanjiWatsuki/Kunoichi-7B", "mistralai/Mistral-7B-v0.1"]} | grimjim/fireblossom-32K-7B-4.2bpw_h6_exl2 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2212.04089",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:cgato/TheSpice-7b-v0.1.1",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:SanjiWatsuki/Kunoichi-7B",
"base_model:mistralai/Mistral-7B-v0.1",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T02:38:10+00:00 | [
"2212.04089"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2212.04089 #base_model-HuggingFaceH4/zephyr-7b-beta #base_model-cgato/TheSpice-7b-v0.1.1 #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #base_model-SanjiWatsuki/Kunoichi-7B #base_model-mistralai/Mistral-7B-v0.1 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Fireblossom-32K-7B
This is a merge of pre-trained language models created using mergekit.
For this merge, I went back to Mistral 7B v0.1 for the literal base model for task arithmetic merger, which can be pushed to at least 16K context length after adjusting rope theta from 10K to 100K. With the original (true) base model, the models merged in should be mathematically equivalent to LoRA adapters. I left the original 32K context claimed by Mistral 7B v0.1.
The goal was a merge model more varied in its outputs, a goal which inherently harms accuracy in favor of creativity. To this end, I chose a model trained to be strong at narrative roleplay (cgato's work) along with three models that were good at reasoning (fine-tunes by HuggingFaceH4 and SanjiWatsuki). The result appears to be good at following card instructions, perhaps to a fault.
Sampler settings: Tested lightly with temperature=0.7 and minP=0.01. For greater creativity, boost temperature.
Download options:
* full weights
* Q8_0 GGUF
* 8.0bpw h8 exl2
## Merge Details
### Merge Method
This model was merged using the task arithmetic merge method using mistralai/Mistral-7B-v0.1 as a base.
### Models Merged
The following models were included in the merge:
* HuggingFaceH4/zephyr-7b-beta
* cgato/TheSpice-7b-v0.1.1
* SanjiWatsuki/Kunoichi-DPO-v2-7B
* SanjiWatsuki/Kunoichi-7B
### Configuration
The following YAML configuration was used to produce this model:
| [
"# Fireblossom-32K-7B\n\nThis is a merge of pre-trained language models created using mergekit.\n\nFor this merge, I went back to Mistral 7B v0.1 for the literal base model for task arithmetic merger, which can be pushed to at least 16K context length after adjusting rope theta from 10K to 100K. With the original (true) base model, the models merged in should be mathematically equivalent to LoRA adapters. I left the original 32K context claimed by Mistral 7B v0.1.\n\nThe goal was a merge model more varied in its outputs, a goal which inherently harms accuracy in favor of creativity. To this end, I chose a model trained to be strong at narrative roleplay (cgato's work) along with three models that were good at reasoning (fine-tunes by HuggingFaceH4 and SanjiWatsuki). The result appears to be good at following card instructions, perhaps to a fault.\n\nSampler settings: Tested lightly with temperature=0.7 and minP=0.01. For greater creativity, boost temperature.\n\nDownload options:\n* full weights\n* Q8_0 GGUF\n* 8.0bpw h8 exl2",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the task arithmetic merge method using mistralai/Mistral-7B-v0.1 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* HuggingFaceH4/zephyr-7b-beta\n* cgato/TheSpice-7b-v0.1.1\n* SanjiWatsuki/Kunoichi-DPO-v2-7B\n* SanjiWatsuki/Kunoichi-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2212.04089 #base_model-HuggingFaceH4/zephyr-7b-beta #base_model-cgato/TheSpice-7b-v0.1.1 #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #base_model-SanjiWatsuki/Kunoichi-7B #base_model-mistralai/Mistral-7B-v0.1 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Fireblossom-32K-7B\n\nThis is a merge of pre-trained language models created using mergekit.\n\nFor this merge, I went back to Mistral 7B v0.1 for the literal base model for task arithmetic merger, which can be pushed to at least 16K context length after adjusting rope theta from 10K to 100K. With the original (true) base model, the models merged in should be mathematically equivalent to LoRA adapters. I left the original 32K context claimed by Mistral 7B v0.1.\n\nThe goal was a merge model more varied in its outputs, a goal which inherently harms accuracy in favor of creativity. To this end, I chose a model trained to be strong at narrative roleplay (cgato's work) along with three models that were good at reasoning (fine-tunes by HuggingFaceH4 and SanjiWatsuki). The result appears to be good at following card instructions, perhaps to a fault.\n\nSampler settings: Tested lightly with temperature=0.7 and minP=0.01. For greater creativity, boost temperature.\n\nDownload options:\n* full weights\n* Q8_0 GGUF\n* 8.0bpw h8 exl2",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the task arithmetic merge method using mistralai/Mistral-7B-v0.1 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* HuggingFaceH4/zephyr-7b-beta\n* cgato/TheSpice-7b-v0.1.1\n* SanjiWatsuki/Kunoichi-DPO-v2-7B\n* SanjiWatsuki/Kunoichi-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | null |
# DavidAU/SolarMaid-v0.1.1-Q8_0-GGUF
This model was converted to GGUF format from [`Undi95/SolarMaid-v0.1.1`](https://huggingface.co/Undi95/SolarMaid-v0.1.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Undi95/SolarMaid-v0.1.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/SolarMaid-v0.1.1-Q8_0-GGUF --model solarmaid-v0.1.1.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/SolarMaid-v0.1.1-Q8_0-GGUF --model solarmaid-v0.1.1.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m solarmaid-v0.1.1.Q8_0.gguf -n 128
```
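As an alternative not covered above, the same GGUF file can be loaded from Python with `llama-cpp-python`; the file name is assumed to match the one used in the commands above:
```python
from llama_cpp import Llama

# Sketch: load the locally downloaded quant and run a short completion.
llm = Llama(model_path="solarmaid-v0.1.1.Q8_0.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```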
| {"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "nsfw", "llama-cpp", "gguf-my-repo"]} | DavidAU/SolarMaid-v0.1.1-Q8_0-GGUF | null | [
"gguf",
"not-for-all-audiences",
"nsfw",
"llama-cpp",
"gguf-my-repo",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-18T02:38:12+00:00 | [] | [] | TAGS
#gguf #not-for-all-audiences #nsfw #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #region-us
|
# DavidAU/SolarMaid-v0.1.1-Q8_0-GGUF
This model was converted to GGUF format from 'Undi95/SolarMaid-v0.1.1' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/SolarMaid-v0.1.1-Q8_0-GGUF\nThis model was converted to GGUF format from 'Undi95/SolarMaid-v0.1.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #not-for-all-audiences #nsfw #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #region-us \n",
"# DavidAU/SolarMaid-v0.1.1-Q8_0-GGUF\nThis model was converted to GGUF format from 'Undi95/SolarMaid-v0.1.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
reinforcement-learning | ml-agents |
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Rudolph314/ppo-PyramidsRND
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids"]} | Rudolph314/ppo-PyramidsRND | null | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | null | 2024-04-18T02:39:31+00:00 | [] | [] | TAGS
#ml-agents #tensorboard #onnx #Pyramids #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Pyramids #region-us
|
# ppo Agent playing Pyramids
This is a trained model of a ppo agent playing Pyramids
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how ML-Agents works:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: Rudolph314/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
| [
"# ppo Agent playing Pyramids\n This is a trained model of a ppo agent playing Pyramids\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: Rudolph314/ppo-PyramidsRND\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] | [
"TAGS\n#ml-agents #tensorboard #onnx #Pyramids #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Pyramids #region-us \n",
"# ppo Agent playing Pyramids\n This is a trained model of a ppo agent playing Pyramids\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: Rudolph314/ppo-PyramidsRND\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
null | null |
# DavidAU/SOLAR-10B-OrcaDPO-Jawade-Q8_0-GGUF
This model was converted to GGUF format from [`bhavinjawade/SOLAR-10B-OrcaDPO-Jawade`](https://huggingface.co/bhavinjawade/SOLAR-10B-OrcaDPO-Jawade) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bhavinjawade/SOLAR-10B-OrcaDPO-Jawade) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/SOLAR-10B-OrcaDPO-Jawade-Q8_0-GGUF --model solar-10b-orcadpo-jawade.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/SOLAR-10B-OrcaDPO-Jawade-Q8_0-GGUF --model solar-10b-orcadpo-jawade.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m solar-10b-orcadpo-jawade.Q8_0.gguf -n 128
```
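If you build llama.cpp from source as shown above, you still need the GGUF file locally; one way to fetch just that file (not part of the original card) is the Hugging Face CLI:
```bash
# Sketch: download only the Q8_0 quant from this repo into the current directory.
pip install -U "huggingface_hub[cli]"
huggingface-cli download DavidAU/SOLAR-10B-OrcaDPO-Jawade-Q8_0-GGUF solar-10b-orcadpo-jawade.Q8_0.gguf --local-dir .
```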
| {"license": "mit", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["Intel/orca_dpo_pairs"]} | DavidAU/SOLAR-10B-OrcaDPO-Jawade-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:Intel/orca_dpo_pairs",
"license:mit",
"region:us"
] | null | 2024-04-18T02:39:43+00:00 | [] | [] | TAGS
#gguf #llama-cpp #gguf-my-repo #dataset-Intel/orca_dpo_pairs #license-mit #region-us
|
# DavidAU/SOLAR-10B-OrcaDPO-Jawade-Q8_0-GGUF
This model was converted to GGUF format from 'bhavinjawade/SOLAR-10B-OrcaDPO-Jawade' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/SOLAR-10B-OrcaDPO-Jawade-Q8_0-GGUF\nThis model was converted to GGUF format from 'bhavinjawade/SOLAR-10B-OrcaDPO-Jawade' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #dataset-Intel/orca_dpo_pairs #license-mit #region-us \n",
"# DavidAU/SOLAR-10B-OrcaDPO-Jawade-Q8_0-GGUF\nThis model was converted to GGUF format from 'bhavinjawade/SOLAR-10B-OrcaDPO-Jawade' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or gymnasium, depending on your setup

# load_from_hub is the helper defined in the Deep RL course notebook (it wraps huggingface_hub).
model = load_from_hub(repo_id="WharfRat/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
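A minimal greedy rollout with the loaded Q-table could then look like the sketch below; the `"qtable"` key and the five-value `step` return follow the Deep RL course / gymnasium conventions and are assumptions rather than details stated in this card:
```python
import numpy as np

# Sketch: play one greedy episode with the loaded Q-table (key names assumed from the course).
qtable = model["qtable"]
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action for the current state
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode reward:", total_reward)
```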
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.50 +/- 2.76", "name": "mean_reward", "verified": false}]}]}]} | WharfRat/q-Taxi-v3 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-18T02:40:03+00:00 | [] | [] | TAGS
#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3.
## Usage
model = load_from_hub(repo_id="WharfRat/q-Taxi-v3", filename="URL")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = URL(model["env_id"])
| [
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage\n\n model = load_from_hub(repo_id=\"WharfRat/q-Taxi-v3\", filename=\"URL\")\n\n # Don't forget to check if you need to add additional attributes (is_slippery=False etc)\n env = URL(model[\"env_id\"])"
] | [
"TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage\n\n model = load_from_hub(repo_id=\"WharfRat/q-Taxi-v3\", filename=\"URL\")\n\n # Don't forget to check if you need to add additional attributes (is_slippery=False etc)\n env = URL(model[\"env_id\"])"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2_medical_bios_5000_2ep
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
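Since this run is tagged `trl`/`sft`, the training wiring presumably resembled the sketch below; the dataset path, output directory, text column name, and exact `SFTTrainer` keyword set are assumptions (they vary across trl versions), not details taken from this card:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Placeholder dataset: the actual "medical bios" data is not published with this card.
train_dataset = load_dataset("json", data_files="medical_bios_5000.jsonl", split="train")

args = TrainingArguments(
    output_dir="Mistral-7B-Instruct-v0.2_medical_bios_5000_2ep",
    learning_rate=1.5e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=32,  # 2 x 32 = effective batch size 64, as listed above
    seed=0,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    dataset_text_field="text",  # assumed column name
)
trainer.train()
```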
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
| {"tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "Mistral-7B-Instruct-v0.2_medical_bios_5000_2ep", "results": []}]} | mohsenfayyaz/Mistral-7B-Instruct-v0.2_medical_bios_5000_2ep | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T02:40:56+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #trl #sft #generated_from_trainer #conversational #base_model-mistralai/Mistral-7B-Instruct-v0.2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mistral-7B-Instruct-v0.2_medical_bios_5000_2ep
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
| [
"# Mistral-7B-Instruct-v0.2_medical_bios_5000_2ep\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.5e-06\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #trl #sft #generated_from_trainer #conversational #base_model-mistralai/Mistral-7B-Instruct-v0.2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mistral-7B-Instruct-v0.2_medical_bios_5000_2ep\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.5e-06\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.15.2"
] |
null | null |
# DavidAU/nectororca-solar10b-jawade-Q8_0-GGUF
This model was converted to GGUF format from [`bhavinjawade/nectororca-solar10b-jawade`](https://huggingface.co/bhavinjawade/nectororca-solar10b-jawade) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bhavinjawade/nectororca-solar10b-jawade) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/nectororca-solar10b-jawade-Q8_0-GGUF --model nectororca-solar10b-jawade.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/nectororca-solar10b-jawade-Q8_0-GGUF --model nectororca-solar10b-jawade.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m nectororca-solar10b-jawade.Q8_0.gguf -n 128
```
| {"license": "mit", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["Intel/orca_dpo_pairs"]} | DavidAU/nectororca-solar10b-jawade-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:Intel/orca_dpo_pairs",
"license:mit",
"region:us"
] | null | 2024-04-18T02:41:15+00:00 | [] | [] | TAGS
#gguf #llama-cpp #gguf-my-repo #dataset-Intel/orca_dpo_pairs #license-mit #region-us
|
# DavidAU/nectororca-solar10b-jawade-Q8_0-GGUF
This model was converted to GGUF format from 'bhavinjawade/nectororca-solar10b-jawade' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/nectororca-solar10b-jawade-Q8_0-GGUF\nThis model was converted to GGUF format from 'bhavinjawade/nectororca-solar10b-jawade' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #dataset-Intel/orca_dpo_pairs #license-mit #region-us \n",
"# DavidAU/nectororca-solar10b-jawade-Q8_0-GGUF\nThis model was converted to GGUF format from 'bhavinjawade/nectororca-solar10b-jawade' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
# 🌺 Calyx_7B
A fine-tune of [rmdhirr/Anthesis_7B](https://hf.co/rmdhirr/Anthesis_7B), made for NSFW purposes.
Calyx_7B was trained on these datasets:
- Himitsui/Lewd-Assistant-v1
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED
# Formatting/Preset
Alpaca works best, but Mistral provides good outputs as well.
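For reference, one widely used Alpaca-style template (a common community convention, not something specified by this card) looks like:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
{response}
```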
---
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="100"/>](https://github.com/unslothai/unsloth) | {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft", "fine-tune", "roleplay"], "datasets": ["Himitsui/Lewd-Assistant-v1", "athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED"], "base_model": "rmdhirr/Anthesis_7B"} | rmdhirr/Calyx_7B | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"fine-tune",
"roleplay",
"en",
"dataset:Himitsui/Lewd-Assistant-v1",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED",
"base_model:rmdhirr/Anthesis_7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T02:42:32+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #fine-tune #roleplay #en #dataset-Himitsui/Lewd-Assistant-v1 #dataset-athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED #base_model-rmdhirr/Anthesis_7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Calyx_7B
A fine-tune of rmdhirr/Anthesis_7B, made for NSFW purposes.
Calyx_7B was trained on these datasets:
- Himitsui/Lewd-Assistant-v1
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED
# Formatting/Preset
Alpaca works best, but Mistral provides good outputs as well.
---
<img src="URL width="100"/> | [
"# Calyx_7B\nA fine-tune of rmdhirr/Anthesis_7B, made for NSFW purposes.\n\nCalyx_7B was trained on these datasets:\n- Himitsui/Lewd-Assistant-v1\n- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED",
"# Formatting/Preset\nAlpaca works best, but Mistral provides good outputs as well.\n\n---\n\n<img src=\"URL width=\"100\"/>"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #fine-tune #roleplay #en #dataset-Himitsui/Lewd-Assistant-v1 #dataset-athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED #base_model-rmdhirr/Anthesis_7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Calyx_7B\nA fine-tune of rmdhirr/Anthesis_7B, made for NSFW purposes.\n\nCalyx_7B was trained on these datasets:\n- Himitsui/Lewd-Assistant-v1\n- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED",
"# Formatting/Preset\nAlpaca works best, but Mistral provides good outputs as well.\n\n---\n\n<img src=\"URL width=\"100\"/>"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
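A minimal loading sketch, assuming this repository hosts a PEFT (LoRA-style) adapter for the base model listed in the card metadata (`mistralai/Mistral-7B-Instruct-v0.2`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Sketch only: repo ids taken from this card's metadata; quantization and dtype options omitted.
base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "dzungPaduahsgs/Mistral7Bcleaning_adamw_bnb_8bit_model_16bit"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello!", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```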
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"} | dzungPaduahsgs/Mistral7Bcleaning_adamw_bnb_8bit_model_16bit | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-04-18T02:45:13+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
reinforcement-learning | transformers |
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="baek26//tmp/tmpqagxut0y/baek26/cnn_dailymail_6849_bart-dialogsum")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("baek26//tmp/tmpqagxut0y/baek26/cnn_dailymail_6849_bart-dialogsum")
model = AutoModelForCausalLMWithValueHead.from_pretrained("baek26//tmp/tmpqagxut0y/baek26/cnn_dailymail_6849_bart-dialogsum")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
| {"license": "apache-2.0", "tags": ["trl", "ppo", "transformers", "reinforcement-learning"]} | baek26/cnn_dailymail_6849_bart-dialogsum | null | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T02:46:03+00:00 | [] | [] | TAGS
#transformers #safetensors #bart #text2text-generation #trl #ppo #reinforcement-learning #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# TRL Model
This is a TRL language model that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
You can then generate text as follows:
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
| [
"# TRL Model\n\nThis is a TRL language model that has been fine-tuned with reinforcement learning to\n guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.",
"## Usage\n\nTo use this model for inference, first install the TRL library:\n\n\n\nYou can then generate text as follows:\n\n\n\nIf you want to use the model for training or to obtain the outputs from the value head, load the model as follows:"
] | [
"TAGS\n#transformers #safetensors #bart #text2text-generation #trl #ppo #reinforcement-learning #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# TRL Model\n\nThis is a TRL language model that has been fine-tuned with reinforcement learning to\n guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.",
"## Usage\n\nTo use this model for inference, first install the TRL library:\n\n\n\nYou can then generate text as follows:\n\n\n\nIf you want to use the model for training or to obtain the outputs from the value head, load the model as follows:"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"} | dzungPaduahsgs/Mistral7Bcleaning_adamw_bnb_8bit_model_16bit_merged | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-04-18T02:50:21+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.1.dev0 | {"library_name": "peft", "base_model": "deepseek-ai/deepseek-coder-1.3b-instruct"} | CMU-AIR2/math-deepseek-lora-arith-all-simple | null | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:deepseek-ai/deepseek-coder-1.3b-instruct",
"region:us"
] | null | 2024-04-18T02:52:11+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #llama #arxiv-1910.09700 #base_model-deepseek-ai/deepseek-coder-1.3b-instruct #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.9.1.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.9.1.dev0"
] | [
"TAGS\n#peft #safetensors #llama #arxiv-1910.09700 #base_model-deepseek-ai/deepseek-coder-1.3b-instruct #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.9.1.dev0"
] |
null | null |
# DavidAU/SOLAR-10B-Nector-DPO-Jawade-Q8_0-GGUF
This model was converted to GGUF format from [`bhavinjawade/SOLAR-10B-Nector-DPO-Jawade`](https://huggingface.co/bhavinjawade/SOLAR-10B-Nector-DPO-Jawade) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bhavinjawade/SOLAR-10B-Nector-DPO-Jawade) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/SOLAR-10B-Nector-DPO-Jawade-Q8_0-GGUF --model solar-10b-nector-dpo-jawade.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/SOLAR-10B-Nector-DPO-Jawade-Q8_0-GGUF --model solar-10b-nector-dpo-jawade.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m solar-10b-nector-dpo-jawade.Q8_0.gguf -n 128
```
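As an alternative to the command-line usage above, a minimal llama-cpp-python sketch (assuming the Q8_0 file has already been downloaded into the working directory) could look like this:
```python
# Sketch using llama-cpp-python (pip install llama-cpp-python); assumes
# solar-10b-nector-dpo-jawade.Q8_0.gguf was downloaded from this repo beforehand.
from llama_cpp import Llama

llm = Llama(model_path="solar-10b-nector-dpo-jawade.Q8_0.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```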
| {"license": "mit", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["Intel/orca_dpo_pairs"]} | DavidAU/SOLAR-10B-Nector-DPO-Jawade-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:Intel/orca_dpo_pairs",
"license:mit",
"region:us"
] | null | 2024-04-18T02:55:14+00:00 | [] | [] | TAGS
#gguf #llama-cpp #gguf-my-repo #dataset-Intel/orca_dpo_pairs #license-mit #region-us
|
# DavidAU/SOLAR-10B-Nector-DPO-Jawade-Q8_0-GGUF
This model was converted to GGUF format from 'bhavinjawade/SOLAR-10B-Nector-DPO-Jawade' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/SOLAR-10B-Nector-DPO-Jawade-Q8_0-GGUF\nThis model was converted to GGUF format from 'bhavinjawade/SOLAR-10B-Nector-DPO-Jawade' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #dataset-Intel/orca_dpo_pairs #license-mit #region-us \n",
"# DavidAU/SOLAR-10B-Nector-DPO-Jawade-Q8_0-GGUF\nThis model was converted to GGUF format from 'bhavinjawade/SOLAR-10B-Nector-DPO-Jawade' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# DavidAU/SOLAR-10.7B-Instruct-v1.0-DPO-Q8_0-GGUF
This model was converted to GGUF format from [`Eric111/SOLAR-10.7B-Instruct-v1.0-DPO`](https://huggingface.co/Eric111/SOLAR-10.7B-Instruct-v1.0-DPO) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/SOLAR-10.7B-Instruct-v1.0-DPO) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/SOLAR-10.7B-Instruct-v1.0-DPO-Q8_0-GGUF --model solar-10.7b-instruct-v1.0-dpo.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/SOLAR-10.7B-Instruct-v1.0-DPO-Q8_0-GGUF --model solar-10.7b-instruct-v1.0-dpo.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m solar-10.7b-instruct-v1.0-dpo.Q8_0.gguf -n 128
```
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/SOLAR-10.7B-Instruct-v1.0-DPO-Q8_0-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T02:56:24+00:00 | [] | [] | TAGS
#transformers #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #endpoints_compatible #region-us
|
# DavidAU/SOLAR-10.7B-Instruct-v1.0-DPO-Q8_0-GGUF
This model was converted to GGUF format from 'Eric111/SOLAR-10.7B-Instruct-v1.0-DPO' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/SOLAR-10.7B-Instruct-v1.0-DPO-Q8_0-GGUF\nThis model was converted to GGUF format from 'Eric111/SOLAR-10.7B-Instruct-v1.0-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #endpoints_compatible #region-us \n",
"# DavidAU/SOLAR-10.7B-Instruct-v1.0-DPO-Q8_0-GGUF\nThis model was converted to GGUF format from 'Eric111/SOLAR-10.7B-Instruct-v1.0-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers | # DZgas/GIGABATEMAN-7B AWQ
- Model creator: [DZgas](https://huggingface.co/DZgas)
- Original model: [GIGABATEMAN-7B](https://huggingface.co/DZgas/GIGABATEMAN-7B)
<img src="logo.jpeg">
## Model Summary
If you are tired of neural networks that write 90% warnings and 10% response, this neural network is for you.
I recommend using the <a href=https://huggingface.co/DZgas/GIGABATEMAN-7B-GGUF/tree/main>GGUF Variant</a> with <a href=https://github.com/LostRuins/koboldcpp/releases>koboldcpp</a> (do not use GPT4ALL).
I merged this model for my own use. Over the course of a week, I analyzed the responses of more than 30 neural networks, chose the 4 most suitable ones according to my own criteria, and merged them into one.
With the GIGABATEMAN-7B model, you can talk about everything that is usually forbidden in other models.
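For this AWQ repack specifically, a minimal loading sketch with transformers (assuming the `autoawq` package is installed and a CUDA GPU is available; the prompt is illustrative only) might look like:
```python
# Hypothetical sketch: loads the 4-bit AWQ weights in this repo via transformers.
# Requires autoawq and a CUDA GPU; the prompt is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "solidrust/GIGABATEMAN-7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Write a short scene between two old rivals."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```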
| {"language": ["en"], "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "mistral", "llama", "nsfw", "roleplay", "merge"], "base_model": ["KatyTheCutie/LemonadeRP-4.5.3", "LakoMoor/Silicon-Alice-7B", "HuggingFaceH4/zephyr-7b-beta", "Endevor/InfinityRP-v1-7B"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/GIGABATEMAN-7B-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"llama",
"nsfw",
"roleplay",
"merge",
"en",
"base_model:KatyTheCutie/LemonadeRP-4.5.3",
"base_model:LakoMoor/Silicon-Alice-7B",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:Endevor/InfinityRP-v1-7B",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T02:57:28+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #llama #nsfw #roleplay #merge #en #base_model-KatyTheCutie/LemonadeRP-4.5.3 #base_model-LakoMoor/Silicon-Alice-7B #base_model-HuggingFaceH4/zephyr-7b-beta #base_model-Endevor/InfinityRP-v1-7B #text-generation-inference #region-us
| # DZgas/GIGABATEMAN-7B AWQ
- Model creator: DZgas
- Original model: GIGABATEMAN-7B
<img src="URL">
## Model Summary
If you are tired of neural networks that write 90% warnings and 10% response, this neural network is for you.
I recommend using the <a href=URL Variant</a> with <a href=URL (do not use GPT4ALL).
I merged this model for my own use. Over the course of a week, I analyzed the responses of more than 30 neural networks, chose the 4 most suitable ones according to my own criteria, and merged them into one.
With the GIGABATEMAN-7B model, you can talk about everything that is usually forbidden in other models.
| [
"# DZgas/GIGABATEMAN-7B AWQ\n\n- Model creator: DZgas\n- Original model: GIGABATEMAN-7B\n\n<img src=\"URL\">",
"## Model Summary\n\nIf you tired of neural networks write 90% of warnings and 10% of the response, this neural network is for you\n\nI recommend using <a href=URL Variant</a> with <a href=URL (do not use GPT4ALL)\n\nThis model was merged by me for myself. During the week, I analyzed the responses of more than 30 neural networks. According to personal criteria, I chose the 4 most suitable ones. And merge into one.\n\nWith the GIGABATEMAN-7B model, you can talk about everything that is usually forbidden to discuss in all other models."
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #llama #nsfw #roleplay #merge #en #base_model-KatyTheCutie/LemonadeRP-4.5.3 #base_model-LakoMoor/Silicon-Alice-7B #base_model-HuggingFaceH4/zephyr-7b-beta #base_model-Endevor/InfinityRP-v1-7B #text-generation-inference #region-us \n",
"# DZgas/GIGABATEMAN-7B AWQ\n\n- Model creator: DZgas\n- Original model: GIGABATEMAN-7B\n\n<img src=\"URL\">",
"## Model Summary\n\nIf you tired of neural networks write 90% of warnings and 10% of the response, this neural network is for you\n\nI recommend using <a href=URL Variant</a> with <a href=URL (do not use GPT4ALL)\n\nThis model was merged by me for myself. During the week, I analyzed the responses of more than 30 neural networks. According to personal criteria, I chose the 4 most suitable ones. And merge into one.\n\nWith the GIGABATEMAN-7B model, you can talk about everything that is usually forbidden to discuss in all other models."
] |
text-generation | transformers | # ozayezerceli/Selocan-2x7B-v1 AWQ
- Model creator: [ozayezerceli](https://huggingface.co/ozayezerceli)
- Original model: [Selocan-2x7B-v1](https://huggingface.co/Locutusque/Selocan-2x7B-v1)
## Model Summary
Selocan-2x7B-v1 is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [TURKCELL/Turkcell-LLM-7b-v1](https://huggingface.co/TURKCELL/Turkcell-LLM-7b-v1)
* [NovusResearch/Novus-7b-tr_v1](https://huggingface.co/NovusResearch/Novus-7b-tr_v1)
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "TURKCELL/Turkcell-LLM-7b-v1", "NovusResearch/Novus-7b-tr_v1"], "base_model": ["TURKCELL/Turkcell-LLM-7b-v1", "NovusResearch/Novus-7b-tr_v1"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/Selocan-2x7B-v1-AWQ | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"TURKCELL/Turkcell-LLM-7b-v1",
"NovusResearch/Novus-7b-tr_v1",
"conversational",
"base_model:TURKCELL/Turkcell-LLM-7b-v1",
"base_model:NovusResearch/Novus-7b-tr_v1",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T02:57:47+00:00 | [] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #moe #frankenmoe #merge #mergekit #lazymergekit #TURKCELL/Turkcell-LLM-7b-v1 #NovusResearch/Novus-7b-tr_v1 #conversational #base_model-TURKCELL/Turkcell-LLM-7b-v1 #base_model-NovusResearch/Novus-7b-tr_v1 #license-apache-2.0 #text-generation-inference #region-us
| # ozayezerceli/Selocan-2x7B-v1 AWQ
- Model creator: ozayezerceli
- Original model: Selocan-2x7B-v1
## Model Summary
Selocan-2x7B-v1 is a Mixture of Experts (MoE) made with the following models using LazyMergekit:
* TURKCELL/Turkcell-LLM-7b-v1
* NovusResearch/Novus-7b-tr_v1
| [
"# ozayezerceli/Selocan-2x7B-v1 AWQ\n\n- Model creator: ozayezerceli\n- Original model: Selocan-2x7B-v1",
"## Model Summary\n\nSelocan-2x7B-v1 is a Mixture of Experts (MoE) made with the following models using LazyMergekit:\n* TURKCELL/Turkcell-LLM-7b-v1\n* NovusResearch/Novus-7b-tr_v1"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #moe #frankenmoe #merge #mergekit #lazymergekit #TURKCELL/Turkcell-LLM-7b-v1 #NovusResearch/Novus-7b-tr_v1 #conversational #base_model-TURKCELL/Turkcell-LLM-7b-v1 #base_model-NovusResearch/Novus-7b-tr_v1 #license-apache-2.0 #text-generation-inference #region-us \n",
"# ozayezerceli/Selocan-2x7B-v1 AWQ\n\n- Model creator: ozayezerceli\n- Original model: Selocan-2x7B-v1",
"## Model Summary\n\nSelocan-2x7B-v1 is a Mixture of Experts (MoE) made with the following models using LazyMergekit:\n* TURKCELL/Turkcell-LLM-7b-v1\n* NovusResearch/Novus-7b-tr_v1"
] |
text-generation | transformers | # Novin-AI/Rava-2x7B-v0.1 AWQ
- Model creator: [Novin-AI](https://huggingface.co/Novin-AI)
- Original model: [Rava-2x7B-v0.1](https://huggingface.co/Novin-AI/Rava-2x7B-v0.1)
## Model Summary
The author has not provided a model card
| {"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/Rava-2x7B-v0.1-AWQ | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"conversational",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T02:58:08+00:00 | [] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #text-generation-inference #region-us
| # Novin-AI/Rava-2x7B-v0.1 AWQ
- Model creator: Novin-AI
- Original model: Rava-2x7B-v0.1
## Model Summary
The author has not provided a model card
| [
"# Novin-AI/Rava-2x7B-v0.1 AWQ\n\n- Model creator: Novin-AI\n- Original model: Rava-2x7B-v0.1",
"## Model Summary\n\nThe author has not provided a model card"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #text-generation-inference #region-us \n",
"# Novin-AI/Rava-2x7B-v0.1 AWQ\n\n- Model creator: Novin-AI\n- Original model: Rava-2x7B-v0.1",
"## Model Summary\n\nThe author has not provided a model card"
] |
text-generation | transformers |
# DavidAU/SOLAR-10.7B-Instruct-Forest-DPO-v1-Q8_0-GGUF
This model was converted to GGUF format from [`abhishekchohan/SOLAR-10.7B-Instruct-Forest-DPO-v1`](https://huggingface.co/abhishekchohan/SOLAR-10.7B-Instruct-Forest-DPO-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/abhishekchohan/SOLAR-10.7B-Instruct-Forest-DPO-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/SOLAR-10.7B-Instruct-Forest-DPO-v1-Q8_0-GGUF --model solar-10.7b-instruct-forest-dpo-v1.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/SOLAR-10.7B-Instruct-Forest-DPO-v1-Q8_0-GGUF --model solar-10.7b-instruct-forest-dpo-v1.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m solar-10.7b-instruct-forest-dpo-v1.Q8_0.gguf -n 128
```
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["Intel/orca_dpo_pairs", "nvidia/HelpSteer", "jondurbin/truthy-dpo-v0.1"], "pipeline_tag": "text-generation"} | DavidAU/SOLAR-10.7B-Instruct-Forest-DPO-v1-Q8_0-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:Intel/orca_dpo_pairs",
"dataset:nvidia/HelpSteer",
"dataset:jondurbin/truthy-dpo-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T02:58:11+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama-cpp #gguf-my-repo #text-generation #en #dataset-Intel/orca_dpo_pairs #dataset-nvidia/HelpSteer #dataset-jondurbin/truthy-dpo-v0.1 #license-apache-2.0 #endpoints_compatible #region-us
|
# DavidAU/SOLAR-10.7B-Instruct-Forest-DPO-v1-Q8_0-GGUF
This model was converted to GGUF format from 'abhishekchohan/SOLAR-10.7B-Instruct-Forest-DPO-v1' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/SOLAR-10.7B-Instruct-Forest-DPO-v1-Q8_0-GGUF\nThis model was converted to GGUF format from 'abhishekchohan/SOLAR-10.7B-Instruct-Forest-DPO-v1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #text-generation #en #dataset-Intel/orca_dpo_pairs #dataset-nvidia/HelpSteer #dataset-jondurbin/truthy-dpo-v0.1 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# DavidAU/SOLAR-10.7B-Instruct-Forest-DPO-v1-Q8_0-GGUF\nThis model was converted to GGUF format from 'abhishekchohan/SOLAR-10.7B-Instruct-Forest-DPO-v1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model-books-sentiment-analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6932
- Accuracy: 0.5
- F1: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
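These settings map roughly onto `transformers.TrainingArguments` as in the following sketch (illustrative only; the dataset, label count, and exact `Trainer` call are not documented in this card and are assumed):
```python
# Illustrative mapping of the hyperparameters above; the dataset and the
# num_labels=2 assumption (binary sentiment) are not stated in this card.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

args = TrainingArguments(
    output_dir="model-books-sentiment-analysis",
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    seed=42,
    lr_scheduler_type="linear",  # Adam with betas (0.9, 0.999) and eps 1e-8 is the default optimizer
)

# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```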
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "model-books-sentiment-analysis", "results": []}]} | ginnadoreushn/model-books-sentiment-analysis | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T02:59:33+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# model-books-sentiment-analysis
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6932
- Accuracy: 0.5
- F1: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# model-books-sentiment-analysis\n\nThis model is a fine-tuned version of distilbert-base-uncased on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6932\n- Accuracy: 0.5\n- F1: 0.6667",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.001\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# model-books-sentiment-analysis\n\nThis model is a fine-tuned version of distilbert-base-uncased on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6932\n- Accuracy: 0.5\n- F1: 0.6667",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.001\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | mohamedhachemi/mohazz_V3 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-18T02:59:54+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral7binstruct_summarize
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 50
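A rough sketch of how these values could be wired up with PEFT and transformers is shown below; the LoRA settings and dataset handling are not reported in this card, so everything beyond the listed hyperparameters is an assumption:
```python
# Illustrative only: the LoRA values are assumptions; only the TrainingArguments
# mirror the hyperparameters listed above (warmup is reported as 0.03, which
# reads like a ratio rather than a step count, so it is omitted here).
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")  # assumed values

args = TrainingArguments(
    output_dir="mistral7binstruct_summarize",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    max_steps=50,
)
# These would then be passed, together with mistralai/Mistral-7B-Instruct-v0.2 and
# the "generator" dataset, to a TRL SFTTrainer for the actual fine-tuning run.
```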
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7596 | 0.2193 | 25 | 1.5634 |
| 1.573 | 0.4386 | 50 | 1.4987 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistral7binstruct_summarize", "results": []}]} | acrobatlm/mistral7binstruct_summarize | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T03:02:44+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
| mistral7binstruct\_summarize
============================
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4987
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 1
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: constant
* lr\_scheduler\_warmup\_steps: 0.03
* training\_steps: 50
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 50",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 50",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | quynguyen1704/deepseek-math-7b-rl-zaloai-v2 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T03:02:48+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning | sample-factory |
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r rexanwong/rl_course_vizdoom_health_gathering_supreme
```
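If you want the checkpoint placed in a specific experiments directory, the downloader also accepts a destination option. A minimal sketch, assuming the `-d`/`--train_dir` flag described in the Sample-Factory Hugging Face docs (check `--help` on your installed version if it differs):

```
# Assumed flag: -d/--train_dir sets where the downloaded experiment is stored.
python -m sample_factory.huggingface.load_from_hub -r rexanwong/rl_course_vizdoom_health_gathering_supreme -d ./train_dir
```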
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# The auto-generated card pointed at a Colab-internal launcher path; the standard
# Sample-Factory 2.0 ViZDoom enjoy script is assumed here instead.
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
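As a rough sketch of that upload path (assuming the `--push_to_hub` and `--hf_repository` options documented for Sample-Factory's Hugging Face integration, and the standard ViZDoom enjoy script), pushing an updated checkpoint back to this repository would look something like:

```
# Sketch only: verify --push_to_hub and --hf_repository against your installed Sample-Factory version.
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --push_to_hub --hf_repository=rexanwong/rl_course_vizdoom_health_gathering_supreme
```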
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# As above, the Colab-internal launcher path is replaced with the standard
# Sample-Factory 2.0 ViZDoom training script.
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may need to set `--train_for_env_steps` to a suitably high number, since the resumed run continues from the step count at which it previously stopped.
| {"library_name": "sample-factory", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "sample-factory"], "model-index": [{"name": "APPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "doom_health_gathering_supreme", "type": "doom_health_gathering_supreme"}, "metrics": [{"type": "mean_reward", "value": "10.89 +/- 4.94", "name": "mean_reward", "verified": false}]}]}]} | rexanwong/rl_course_vizdoom_health_gathering_supreme | null | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-18T03:03:15+00:00 | [] | [] | TAGS
#sample-factory #tensorboard #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
An APPO model trained on the doom_health_gathering_supreme environment.
This model was trained using Sample-Factory 2.0: URL
Documentation for how to use Sample-Factory can be found at URL
## Downloading the model
After installing Sample-Factory, download the model with:
## Using the model
To run the model after download, use the 'enjoy' script corresponding to this environment:
You can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.
See URL for more details
## Training with this model
To continue training with this model, use the 'train' script corresponding to this environment:
Note, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at.
| [
"## Downloading the model\n\nAfter installing Sample-Factory, download the model with:",
"## Using the model\n\nTo run the model after download, use the 'enjoy' script corresponding to this environment:\n\n\n\nYou can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.\nSee URL for more details",
"## Training with this model\n\nTo continue training with this model, use the 'train' script corresponding to this environment:\n\n\nNote, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at."
] | [
"TAGS\n#sample-factory #tensorboard #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"## Downloading the model\n\nAfter installing Sample-Factory, download the model with:",
"## Using the model\n\nTo run the model after download, use the 'enjoy' script corresponding to this environment:\n\n\n\nYou can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.\nSee URL for more details",
"## Training with this model\n\nTo continue training with this model, use the 'train' script corresponding to this environment:\n\n\nNote, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at."
] |