Dataset columns and types:

- `pipeline_tag` — string (48 classes)
- `library_name` — string (205 classes)
- `text` — string (0 to 18.3M characters)
- `metadata` — string (2 to 1.07B characters)
- `id` — string (5 to 122 characters)
- `last_modified` — null
- `tags` — list (1 to 1.84k items)
- `sha` — null
- `created_at` — string (25 characters)

| pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at |
|---|---|---|---|---|---|---|---|---|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Terjman-Large
This is a translation model from English to Moroccan Darija. It is a fine-tuned version of "Helsinki-NLP/opus-mt-tc-big-en-ar" on the "BounharAbdelaziz/English-to-Moroccan-Darija" dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- warmup_ratio: 0.03
- gradient_accumulation_steps: 2
- total_train_batch_size: 182
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
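As a quick sanity check on these settings: the effective (total) train batch size is the per-device batch size times the gradient accumulation steps (times the device count, if more than one). A minimal sketch — note it yields 192 for the values above, so the reported total of 182 may reflect a different device setup:

```python
# Effective train batch size under gradient accumulation:
# per-device batch size x accumulation steps x number of devices.
def effective_batch_size(per_device: int, accum_steps: int, num_devices: int = 1) -> int:
    return per_device * accum_steps * num_devices

print(effective_batch_size(96, 2))  # -> 192
```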
### Framework versions
- Transformers 4.39.2
- Pytorch 2.2.2+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"language": ["ar", "en"], "tags": ["generated_from_trainer"], "datasets": ["BounharAbdelaziz/English-to-Moroccan-Darija"], "metrics": ["bleu"], "model-index": [{"name": "results", "results": []}]} | atlasia/Terjman-Large | null | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"ar",
"en",
"dataset:BounharAbdelaziz/English-to-Moroccan-Darija",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"has_space"
]
| null | 2024-04-26T20:33:18+00:00 |
null | null | {} | daniel-dona/icefall-asr-commonvoice-zipformer-streaming-it | null | [
"region:us"
]
| null | 2024-04-26T20:33:41+00:00 |
|
null | transformers | {} | baseten/llama3-70b-fp8-fp8kv-bs64-i256-o128 | null | [
"transformers",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T20:34:21+00:00 |
|
null | null | {"license": "openrail"} | e2h/tzuyubyleelolifix | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-26T20:34:23+00:00 |
|
null | null | {"license": "mit"} | litonglinux/whisper-cpp-wasm | null | [
"license:mit",
"region:us"
]
| null | 2024-04-26T20:34:51+00:00 |
|
text-generation | transformers | {} | jimmy58/my_awesome_eli5_clm-model | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T20:35:15+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# danish-bert-botxo-fb-housing-posts
This model is a fine-tuned version of [Maltehb/danish-bert-botxo](https://huggingface.co/Maltehb/danish-bert-botxo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1659
- Accuracy: 0.9519
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
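The `linear` scheduler used here decays the learning rate from its peak down to zero over the course of training, optionally after a linear warmup. A minimal sketch of the schedule (mirroring, not importing, the `transformers` implementation; 275 steps matches the 5 epochs × 55 steps in the table below):

```python
def linear_schedule(step: int, total_steps: int, peak_lr: float, warmup_steps: int = 0) -> float:
    """Linear warmup to peak_lr, then linear decay to zero."""
    if warmup_steps and step < warmup_steps:
        return peak_lr * step / warmup_steps
    remaining = max(0, total_steps - step)
    decay_span = max(1, total_steps - warmup_steps)
    return peak_lr * remaining / decay_span

# With no warmup (as in this run), the LR decays straight from 2e-05 to 0.
assert linear_schedule(0, 275, 2e-05) == 2e-05
assert linear_schedule(275, 275, 2e-05) == 0.0
```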
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.372 | 1.0 | 55 | 0.2514 | 0.9251 |
| 0.171 | 2.0 | 110 | 0.1881 | 0.9305 |
| 0.2315 | 3.0 | 165 | 0.1854 | 0.9465 |
| 0.1284 | 4.0 | 220 | 0.1745 | 0.9465 |
| 0.0353 | 5.0 | 275 | 0.1659 | 0.9519 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "Maltehb/danish-bert-botxo", "model-index": [{"name": "danish-bert-botxo-fb-housing-posts", "results": []}]} | hoaj/danish-bert-botxo-fb-housing-posts | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:Maltehb/danish-bert-botxo",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T20:36:20+00:00 |
null | null | {} | squaadinc/chuck222222 | null | [
"region:us"
]
| null | 2024-04-26T20:36:52+00:00 |
|
null | null | {} | Alialakbarli/model_bart | null | [
"region:us"
]
| null | 2024-04-26T20:38:32+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5959
- F1 Score: 0.6921
- Accuracy: 0.6927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6344 | 1.04 | 200 | 0.6225 | 0.6144 | 0.6575 |
| 0.6043 | 2.08 | 400 | 0.6163 | 0.6645 | 0.6631 |
| 0.5927 | 3.12 | 600 | 0.5941 | 0.6726 | 0.6839 |
| 0.5884 | 4.17 | 800 | 0.5944 | 0.6772 | 0.6787 |
| 0.5821 | 5.21 | 1000 | 0.5999 | 0.6648 | 0.6771 |
| 0.5781 | 6.25 | 1200 | 0.5889 | 0.6766 | 0.6830 |
| 0.57 | 7.29 | 1400 | 0.5864 | 0.6840 | 0.6852 |
| 0.5676 | 8.33 | 1600 | 0.5910 | 0.6843 | 0.6865 |
| 0.5642 | 9.38 | 1800 | 0.5859 | 0.6850 | 0.6849 |
| 0.5589 | 10.42 | 2000 | 0.5879 | 0.6746 | 0.6856 |
| 0.5594 | 11.46 | 2200 | 0.5878 | 0.6760 | 0.6888 |
| 0.5474 | 12.5 | 2400 | 0.5921 | 0.6837 | 0.6882 |
| 0.547 | 13.54 | 2600 | 0.5910 | 0.6845 | 0.6875 |
| 0.546 | 14.58 | 2800 | 0.5962 | 0.6778 | 0.6888 |
| 0.5445 | 15.62 | 3000 | 0.5916 | 0.6875 | 0.6908 |
| 0.5372 | 16.67 | 3200 | 0.5967 | 0.6834 | 0.6869 |
| 0.5345 | 17.71 | 3400 | 0.5937 | 0.6843 | 0.6869 |
| 0.5332 | 18.75 | 3600 | 0.5958 | 0.6911 | 0.6973 |
| 0.5322 | 19.79 | 3800 | 0.5955 | 0.6925 | 0.6931 |
| 0.5259 | 20.83 | 4000 | 0.5892 | 0.6922 | 0.6973 |
| 0.5225 | 21.88 | 4200 | 0.6037 | 0.6956 | 0.6973 |
| 0.5192 | 22.92 | 4400 | 0.6015 | 0.6874 | 0.6885 |
| 0.5181 | 23.96 | 4600 | 0.6004 | 0.6907 | 0.6905 |
| 0.5149 | 25.0 | 4800 | 0.6000 | 0.6964 | 0.6966 |
| 0.5119 | 26.04 | 5000 | 0.6099 | 0.6871 | 0.6891 |
| 0.5081 | 27.08 | 5200 | 0.6031 | 0.6916 | 0.6924 |
| 0.5067 | 28.12 | 5400 | 0.6027 | 0.6900 | 0.6937 |
| 0.5007 | 29.17 | 5600 | 0.6006 | 0.6901 | 0.6937 |
| 0.4993 | 30.21 | 5800 | 0.6142 | 0.6915 | 0.6934 |
| 0.4967 | 31.25 | 6000 | 0.6157 | 0.6846 | 0.6856 |
| 0.4958 | 32.29 | 6200 | 0.6117 | 0.6835 | 0.6872 |
| 0.4951 | 33.33 | 6400 | 0.6141 | 0.6894 | 0.6891 |
| 0.489 | 34.38 | 6600 | 0.6234 | 0.6854 | 0.6905 |
| 0.4901 | 35.42 | 6800 | 0.6234 | 0.6812 | 0.6833 |
| 0.4895 | 36.46 | 7000 | 0.6140 | 0.6878 | 0.6888 |
| 0.4838 | 37.5 | 7200 | 0.6209 | 0.6917 | 0.6927 |
| 0.4808 | 38.54 | 7400 | 0.6218 | 0.6866 | 0.6895 |
| 0.4883 | 39.58 | 7600 | 0.6064 | 0.6914 | 0.6970 |
| 0.4815 | 40.62 | 7800 | 0.6235 | 0.6883 | 0.6878 |
| 0.4764 | 41.67 | 8000 | 0.6188 | 0.6831 | 0.6849 |
| 0.473 | 42.71 | 8200 | 0.6329 | 0.6819 | 0.6833 |
| 0.4764 | 43.75 | 8400 | 0.6181 | 0.6878 | 0.6901 |
| 0.473 | 44.79 | 8600 | 0.6212 | 0.6869 | 0.6885 |
| 0.4728 | 45.83 | 8800 | 0.6212 | 0.6860 | 0.6882 |
| 0.4713 | 46.88 | 9000 | 0.6265 | 0.6852 | 0.6856 |
| 0.472 | 47.92 | 9200 | 0.6250 | 0.6842 | 0.6852 |
| 0.4705 | 48.96 | 9400 | 0.6273 | 0.6846 | 0.6875 |
| 0.4695 | 50.0 | 9600 | 0.6263 | 0.6861 | 0.6865 |
| 0.4686 | 51.04 | 9800 | 0.6281 | 0.6839 | 0.6849 |
| 0.4644 | 52.08 | 10000 | 0.6287 | 0.6842 | 0.6849 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T20:40:14+00:00 |
text-generation | transformers | {"license": "apache-2.0"} | mucai/vip-llava-llama-3-8b | null | [
"transformers",
"safetensors",
"llava_llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T20:40:43+00:00 |
|
text-generation | transformers |
# GPT-2
You can test the model's full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformer model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on raw texts only, with no human labelling (which is why it can use lots of publicly
available data), using an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of a word) to the right. The model internally uses a mask mechanism to ensure that the
predictions for token `i` use only the inputs from `1` to `i` and not the future tokens.
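The causal mask described above can be pictured as a lower-triangular matrix: position `i` may attend to positions `0..i` only. A small illustrative sketch (not the actual GPT-2 code):

```python
import numpy as np

def causal_mask(seq_len: int) -> np.ndarray:
    # mask[i, j] is True where position i is allowed to attend to position j
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

m = causal_mask(4)
print(m.astype(int))
# [[1 0 0 0]
#  [1 1 0 0]
#  [1 1 1 0]
#  [1 1 1 1]]
```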
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. However, the model is best at what it was pretrained for: generating text from a
prompt.
This is the **smallest** version of GPT-2, with 124M parameters.
**Related Models:** [GPT-2 Large](https://huggingface.co/gpt2-large), [GPT-2 Medium](https://huggingface.co/gpt2-medium) and [GPT-2 XL](https://huggingface.co/gpt2-xl)
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
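As a toy illustration of the core BPE idea — repeatedly merging the most frequent adjacent pair of tokens — here is a simplified sketch; the actual GPT-2 tokenizer is byte-level and uses a learned merge table, so this only shows the mechanism:

```python
from collections import Counter

def bpe_merge_once(tokens: list[str]) -> list[str]:
    """Merge the single most frequent adjacent pair of tokens."""
    pairs = Counter(zip(tokens, tokens[1:]))
    if not pairs:
        return tokens
    (a, b), _ = pairs.most_common(1)[0]
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
            merged.append(a + b)  # fuse the winning pair
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

print(bpe_merge_once(list("banana")))  # -> ['b', 'an', 'an', 'a']
```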
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "mit", "tags": ["exbert"]} | ChatTechAI/Babbage-V1 | null | [
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"onnx",
"safetensors",
"gpt2",
"text-generation",
"exbert",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T20:40:45+00:00 |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 120
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "output", "results": []}]} | soft-boy/decision_transformer | null | [
"transformers",
"tensorboard",
"safetensors",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T20:41:20+00:00 |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/maywell/miqu-evil-dpo
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
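For the multi-part files listed below (e.g. the Q6_K quant), the parts are plain byte slices, so concatenating them in order with `cat` reassembles the original GGUF file. A self-contained demo with small dummy files standing in for the real multi-gigabyte parts — substitute the filenames you actually downloaded:

```shell
# Demo of joining multi-part GGUF downloads: `cat` in part order
# reassembles the original file. Dummy files stand in for real parts.
printf 'half1-' > model.gguf.part1of2
printf 'half2'  > model.gguf.part2of2
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
cat model.gguf   # -> half1-half2
```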
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["not-for-all-audiences"], "base_model": "maywell/miqu-evil-dpo", "license_link": "LICENSE", "license_name": "miqu-license", "quantized_by": "mradermacher"} | mradermacher/miqu-evil-dpo-i1-GGUF | null | [
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:maywell/miqu-evil-dpo",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T20:42:21+00:00 |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold2
This model is a fine-tuned version of [microsoft/swin-large-patch4-window7-224](https://huggingface.co/microsoft/swin-large-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1478
- Accuracy: 0.6735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2127 | 1.0 | 923 | 1.1541 | 0.6049 |
| 0.9204 | 2.0 | 1846 | 1.0192 | 0.6551 |
| 0.7902 | 3.0 | 2769 | 0.9565 | 0.6649 |
| 0.7847 | 4.0 | 3692 | 0.9757 | 0.6743 |
| 0.6816 | 5.0 | 4615 | 0.9792 | 0.6692 |
| 0.4595 | 6.0 | 5538 | 1.0390 | 0.6678 |
| 0.425 | 7.0 | 6461 | 1.0878 | 0.6678 |
| 0.4186 | 8.0 | 7384 | 1.1180 | 0.6727 |
| 0.3048 | 9.0 | 8307 | 1.1393 | 0.6708 |
| 0.2135 | 10.0 | 9230 | 1.1478 | 0.6735 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-large-patch4-window7-224", "model-index": [{"name": "Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold2", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6735135135135135, "name": "Accuracy"}]}]}]} | onizukal/Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold2 | null | [
"transformers",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-large-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T20:42:35+00:00 |
null | null | {} | Swabhinav/llama_3_test | null | [
"region:us"
]
| null | 2024-04-26T20:42:37+00:00 |
|
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [aaditya/OpenBioLLM-Llama3-8B](https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B)
* [mattshumer/Llama-3-8B-16K](https://huggingface.co/mattshumer/Llama-3-8B-16K)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: aaditya/OpenBioLLM-Llama3-8B
layer_range: [0, 32]
- model: mattshumer/Llama-3-8B-16K
layer_range: [0, 32]
merge_method: slerp
base_model: aaditya/OpenBioLLM-Llama3-8B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
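As a rough intuition for what the SLERP method does with the `t` values above: each output tensor is a spherical interpolation between the two models' corresponding tensors, with `t` varying across layers and module types. A toy sketch on plain vectors (illustrative only, not mergekit's implementation):

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between vectors a and b at fraction t."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(a_n @ b_n, -1.0, 1.0))  # angle between a and b
    if np.isclose(omega, 0.0):
        return (1 - t) * a + t * b  # vectors are parallel; fall back to lerp
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
mid = slerp(a, b, 0.5)  # t=0.5 mixes both "models" equally
print(mid)              # ~[0.7071, 0.7071], still unit length
```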
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["aaditya/OpenBioLLM-Llama3-8B", "mattshumer/Llama-3-8B-16K"]} | o2satz/L3_med16 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:aaditya/OpenBioLLM-Llama3-8B",
"base_model:mattshumer/Llama-3-8B-16K",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T20:43:19+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | woransa/OrpoLlama-3-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T20:43:52+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5838
- F1 Score: 0.6957
- Accuracy: 0.6970
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
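The `linear` scheduler above decays the learning rate from its peak to zero over the 10,000 training steps. A minimal sketch of that decay, using the values above (illustrative only — the actual schedule is handled internally by the `transformers` trainer):

```python
def linear_lr(step, base_lr=5e-4, total_steps=10_000):
    """Learning rate at a given optimizer step under linear decay to zero."""
    return base_lr * max(total_steps - step, 0) / total_steps

# Peak at step 0, half-way at step 5000, zero at (and after) step 10000.
print(linear_lr(0), linear_lr(5_000), linear_lr(10_000))
```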
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6296 | 1.04 | 200 | 0.6099 | 0.6444 | 0.6719 |
| 0.5961 | 2.08 | 400 | 0.6079 | 0.6722 | 0.6712 |
| 0.5838 | 3.12 | 600 | 0.5955 | 0.6792 | 0.6852 |
| 0.5789 | 4.17 | 800 | 0.5935 | 0.6831 | 0.6849 |
| 0.5677 | 5.21 | 1000 | 0.5989 | 0.6704 | 0.6794 |
| 0.5599 | 6.25 | 1200 | 0.5914 | 0.6868 | 0.6885 |
| 0.5498 | 7.29 | 1400 | 0.5906 | 0.6822 | 0.6852 |
| 0.5397 | 8.33 | 1600 | 0.5999 | 0.6849 | 0.6895 |
| 0.5339 | 9.38 | 1800 | 0.6028 | 0.6835 | 0.6826 |
| 0.5233 | 10.42 | 2000 | 0.6158 | 0.6733 | 0.6839 |
| 0.5159 | 11.46 | 2200 | 0.6140 | 0.6795 | 0.6888 |
| 0.4976 | 12.5 | 2400 | 0.6165 | 0.6813 | 0.6856 |
| 0.4901 | 13.54 | 2600 | 0.6297 | 0.6852 | 0.6869 |
| 0.483 | 14.58 | 2800 | 0.6441 | 0.6727 | 0.6794 |
| 0.474 | 15.62 | 3000 | 0.6274 | 0.6781 | 0.6823 |
| 0.4618 | 16.67 | 3200 | 0.6527 | 0.6793 | 0.6807 |
| 0.4507 | 17.71 | 3400 | 0.6513 | 0.6815 | 0.6826 |
| 0.4411 | 18.75 | 3600 | 0.6755 | 0.6822 | 0.6849 |
| 0.4325 | 19.79 | 3800 | 0.6762 | 0.6808 | 0.6817 |
| 0.4178 | 20.83 | 4000 | 0.6663 | 0.6830 | 0.6885 |
| 0.4111 | 21.88 | 4200 | 0.7240 | 0.6796 | 0.6810 |
| 0.3942 | 22.92 | 4400 | 0.6946 | 0.6762 | 0.6768 |
| 0.3911 | 23.96 | 4600 | 0.7232 | 0.6850 | 0.6839 |
| 0.3827 | 25.0 | 4800 | 0.7319 | 0.6730 | 0.6722 |
| 0.3718 | 26.04 | 5000 | 0.7550 | 0.6841 | 0.6869 |
| 0.3595 | 27.08 | 5200 | 0.7298 | 0.6854 | 0.6846 |
| 0.3549 | 28.12 | 5400 | 0.7805 | 0.6755 | 0.6787 |
| 0.3477 | 29.17 | 5600 | 0.7857 | 0.6724 | 0.6761 |
| 0.3431 | 30.21 | 5800 | 0.7814 | 0.6853 | 0.6862 |
| 0.3351 | 31.25 | 6000 | 0.8192 | 0.6722 | 0.6751 |
| 0.3288 | 32.29 | 6200 | 0.8572 | 0.6694 | 0.6729 |
| 0.3269 | 33.33 | 6400 | 0.8028 | 0.6777 | 0.6768 |
| 0.3173 | 34.38 | 6600 | 0.8300 | 0.6711 | 0.6732 |
| 0.3119 | 35.42 | 6800 | 0.8257 | 0.6722 | 0.6732 |
| 0.3029 | 36.46 | 7000 | 0.8348 | 0.6721 | 0.6725 |
| 0.3015 | 37.5 | 7200 | 0.8823 | 0.6728 | 0.6738 |
| 0.2983 | 38.54 | 7400 | 0.8730 | 0.6689 | 0.6716 |
| 0.2983 | 39.58 | 7600 | 0.8422 | 0.6660 | 0.6706 |
| 0.2871 | 40.62 | 7800 | 0.8960 | 0.6723 | 0.6732 |
| 0.2812 | 41.67 | 8000 | 0.9115 | 0.6643 | 0.6641 |
| 0.2769 | 42.71 | 8200 | 0.9336 | 0.6722 | 0.6745 |
| 0.2743 | 43.75 | 8400 | 0.9085 | 0.6730 | 0.6738 |
| 0.2707 | 44.79 | 8600 | 0.9092 | 0.6707 | 0.6716 |
| 0.2732 | 45.83 | 8800 | 0.9188 | 0.6715 | 0.6735 |
| 0.2715 | 46.88 | 9000 | 0.9204 | 0.6747 | 0.6755 |
| 0.2623 | 47.92 | 9200 | 0.9272 | 0.6707 | 0.6712 |
| 0.2654 | 48.96 | 9400 | 0.9270 | 0.6700 | 0.6716 |
| 0.2598 | 50.0 | 9600 | 0.9354 | 0.6737 | 0.6738 |
| 0.2611 | 51.04 | 9800 | 0.9368 | 0.6737 | 0.6742 |
| 0.2565 | 52.08 | 10000 | 0.9348 | 0.6711 | 0.6716 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T20:44:24+00:00 |
null | null | {} | Knobi3/Evomerge0 | null | [
"region:us"
]
| null | 2024-04-26T20:44:34+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K9ac-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4618
- F1 Score: 0.8001
- Accuracy: 0.7996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5729 | 1.15 | 200 | 0.5451 | 0.7280 | 0.7290 |
| 0.5306 | 2.3 | 400 | 0.5608 | 0.7185 | 0.7200 |
| 0.512 | 3.45 | 600 | 0.5266 | 0.7362 | 0.7359 |
| 0.5029 | 4.6 | 800 | 0.5154 | 0.7436 | 0.7431 |
| 0.4963 | 5.75 | 1000 | 0.5082 | 0.7440 | 0.7445 |
| 0.4907 | 6.9 | 1200 | 0.5116 | 0.7515 | 0.7510 |
| 0.4837 | 8.05 | 1400 | 0.5103 | 0.7524 | 0.7521 |
| 0.48 | 9.2 | 1600 | 0.5221 | 0.7459 | 0.7463 |
| 0.4729 | 10.34 | 1800 | 0.5101 | 0.7541 | 0.7539 |
| 0.4742 | 11.49 | 2000 | 0.5007 | 0.7596 | 0.7596 |
| 0.4669 | 12.64 | 2200 | 0.5137 | 0.7549 | 0.7546 |
| 0.4675 | 13.79 | 2400 | 0.4950 | 0.7656 | 0.7654 |
| 0.4648 | 14.94 | 2600 | 0.4951 | 0.7651 | 0.7647 |
| 0.4611 | 16.09 | 2800 | 0.5000 | 0.7629 | 0.7625 |
| 0.4573 | 17.24 | 3000 | 0.5075 | 0.7616 | 0.7611 |
| 0.4572 | 18.39 | 3200 | 0.5053 | 0.7625 | 0.7621 |
| 0.4581 | 19.54 | 3400 | 0.4920 | 0.7652 | 0.7647 |
| 0.4508 | 20.69 | 3600 | 0.4946 | 0.7632 | 0.7632 |
| 0.4475 | 21.84 | 3800 | 0.4949 | 0.7641 | 0.7639 |
| 0.4479 | 22.99 | 4000 | 0.4966 | 0.7630 | 0.7629 |
| 0.4468 | 24.14 | 4200 | 0.4915 | 0.7658 | 0.7657 |
| 0.4375 | 25.29 | 4400 | 0.5056 | 0.7644 | 0.7639 |
| 0.4442 | 26.44 | 4600 | 0.4948 | 0.7619 | 0.7614 |
| 0.4416 | 27.59 | 4800 | 0.5015 | 0.7672 | 0.7668 |
| 0.4381 | 28.74 | 5000 | 0.4962 | 0.7631 | 0.7629 |
| 0.4409 | 29.89 | 5200 | 0.4953 | 0.7659 | 0.7654 |
| 0.4345 | 31.03 | 5400 | 0.4977 | 0.7658 | 0.7654 |
| 0.4345 | 32.18 | 5600 | 0.4902 | 0.7655 | 0.7654 |
| 0.4294 | 33.33 | 5800 | 0.5008 | 0.7656 | 0.7654 |
| 0.4378 | 34.48 | 6000 | 0.4893 | 0.7666 | 0.7661 |
| 0.4267 | 35.63 | 6200 | 0.4947 | 0.7699 | 0.7697 |
| 0.434 | 36.78 | 6400 | 0.4922 | 0.7652 | 0.7647 |
| 0.4283 | 37.93 | 6600 | 0.5046 | 0.7654 | 0.7650 |
| 0.4271 | 39.08 | 6800 | 0.4893 | 0.7691 | 0.7686 |
| 0.4252 | 40.23 | 7000 | 0.4951 | 0.7623 | 0.7618 |
| 0.4233 | 41.38 | 7200 | 0.4940 | 0.7655 | 0.7650 |
| 0.425 | 42.53 | 7400 | 0.4938 | 0.7687 | 0.7683 |
| 0.426 | 43.68 | 7600 | 0.4903 | 0.7708 | 0.7704 |
| 0.4194 | 44.83 | 7800 | 0.4950 | 0.7648 | 0.7643 |
| 0.424 | 45.98 | 8000 | 0.4897 | 0.7694 | 0.7690 |
| 0.4236 | 47.13 | 8200 | 0.4926 | 0.7670 | 0.7665 |
| 0.4186 | 48.28 | 8400 | 0.4926 | 0.7669 | 0.7665 |
| 0.4177 | 49.43 | 8600 | 0.4937 | 0.7662 | 0.7657 |
| 0.4183 | 50.57 | 8800 | 0.4941 | 0.7669 | 0.7665 |
| 0.4197 | 51.72 | 9000 | 0.4950 | 0.7659 | 0.7654 |
| 0.4179 | 52.87 | 9200 | 0.4951 | 0.7655 | 0.7650 |
| 0.4188 | 54.02 | 9400 | 0.4934 | 0.7673 | 0.7668 |
| 0.4183 | 55.17 | 9600 | 0.4939 | 0.7673 | 0.7668 |
| 0.4171 | 56.32 | 9800 | 0.4922 | 0.7687 | 0.7683 |
| 0.4187 | 57.47 | 10000 | 0.4928 | 0.7684 | 0.7679 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T20:44:38+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UrduSum5k_multi2
This model is a fine-tuned version of [ahmed0189/mT5-Arabic-text-summarization](https://huggingface.co/ahmed0189/mT5-Arabic-text-summarization) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2260
- Rouge1: 0.14
- Rouge2: 0.0333
- Rougel: 0.14
- Rougelsum: 0.14
- Gen Len: 34.6705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.8803 | 1.0 | 1500 | 3.2260 | 0.14 | 0.0333 | 0.14 | 0.14 | 34.6705 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "UrduSum5k_multi2", "results": []}]} | Alefiah/UrduSum5k_multi2 | null | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T20:45:02+00:00 |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Use the code below to evaluate the agent locally:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

repo_id = "Laz4rz/hf-LunarLander-1-ppo"  # The repo_id
filename = "ppo-LunarLander-v2.zip"  # The model filename.zip

# Download the checkpoint from the Hub and load it into a PPO agent
checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint)

eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "261.43 +/- 17.17", "name": "mean_reward", "verified": false}]}]}]} | Laz4rz/hf-LunarLander-1-ppo | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| null | 2024-04-26T20:45:27+00:00 |
text-to-audio | transformers | {} | zizzimars/speecht5_finetuned_kaztts2_2attempt | null | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T20:47:10+00:00 |
|
null | transformers | {"license": "other", "license_name": "license", "license_link": "LICENSE"} | jdhadljasnajd/chat-model | null | [
"transformers",
"llama",
"license:other",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T20:47:53+00:00 |
|
text-generation | null |
# Phi-3-mini-4k-instruct - llamafile
This repository contains executable weights (which we call
[llamafiles](https://github.com/Mozilla-Ocho/llamafile)) that run on
Linux, MacOS, Windows, FreeBSD, OpenBSD, and NetBSD for AMD64 and ARM64.
- Model creator: [Microsoft](https://huggingface.co/microsoft)
- Original model: [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
## Quickstart
Assuming your system has at least 32GB of RAM, you can try running the
following commands, which download the model, make it executable, and run it.
```
wget https://huggingface.co/jartine/Phi-3-mini-4k-instruct-llamafile/resolve/main/Phi-3-mini-4k-instruct.F16.llamafile
chmod +x Phi-3-mini-4k-instruct.F16.llamafile
./Phi-3-mini-4k-instruct.F16.llamafile --help # view manual
./Phi-3-mini-4k-instruct.F16.llamafile # launch web gui + oai api
./Phi-3-mini-4k-instruct.F16.llamafile -p ... # cli interface (scriptable)
```
Alternatively, you may download an official `llamafile` executable from
Mozilla Ocho on GitHub, in which case you can use the Mixtral llamafiles
as a simple weights data file.
```
llamafile -m ./Phi-3-mini-4k-instruct.F16.llamafile ...
```
For further information, please see the [llamafile
README](https://github.com/mozilla-ocho/llamafile/).
Having **trouble?** See the ["Gotchas"
section](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas)
of the README.
## Prompting
Prompt template:
```
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
Command template:
```
./Phi-3-mini-4k-instruct.F16.llamafile -e -p "<|user|>\n{{prompt}}<|end|>\n<|assistant|>"
```
## About llamafile
llamafile is a new format introduced by Mozilla Ocho on Nov 20th 2023.
It uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp
binaries that run on the stock installs of six OSes for both ARM64 and
AMD64.
In addition to being executables, llamafiles are also zip archives. Each
llamafile contains a GGUF file, which you can extract using the `unzip`
command. If you want to change or add files to your llamafiles, then the
`zipalign` command (distributed on the llamafile github) should be used
instead of the traditional `zip` command.
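Because a llamafile is simultaneously an executable and a zip archive, Python's standard `zipfile` module can inspect one and pull out the GGUF. The sketch below demonstrates the idea on a stand-in archive — the actual member name of the GGUF inside a Phi-3 llamafile is an assumption here and may differ:

```python
import os
import tempfile
import zipfile

# Build a stand-in "llamafile" (really just a zip) to demonstrate extraction.
tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "model.llamafile")
with zipfile.ZipFile(path, "w") as zf:
    zf.writestr("Phi-3-mini-4k-instruct.gguf", b"GGUF")  # placeholder payload

# List members and extract the GGUF — same effect as `unzip model.llamafile '*.gguf'`.
with zipfile.ZipFile(path) as zf:
    ggufs = [n for n in zf.namelist() if n.endswith(".gguf")]
    zf.extract(ggufs[0], tmp)

print(ggufs[0])
```

Note that repacking a modified llamafile should be done with `zipalign`, as described above, so the executable header stays intact.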
## Licensing (Phi-3 Specific)
The Phi-3 llamafiles are licensed Apache 2.0 because some of the
software that went into creating these llamafiles uses that as its
license. The Phi-3 weights themselves were published by Microsoft under
the even more permissive MIT license. You can use the `unzip` command to
extract the MIT-licensed GGUF file from each llamafile, which contains
only the Microsoft Phi-3 weights.
For further details on the complete picture, read our `LICENSE` file,
since it documents the copyright notice of every transitive dependency.
## About Quantization Formats (General Advice)
Your choice of quantization format depends on three things:
1. Will it fit in RAM or VRAM?
2. Is your use case reading (e.g. summarization) or writing (e.g. chatbot)?
3. llamafiles bigger than 4.30 GB are hard to run on Windows (see [gotchas](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas))
Good quants for writing (prediction speed) are Q5\_K\_M and Q4\_0. Text
generation is bounded by memory speed, so smaller quants help, but they
cause the LLM to hallucinate more. However, that doesn't mean they can't
think correctly. A highly degraded quant like `Q2_K` may not make a
great encyclopedia, but it's still capable of logical reasoning and
the emergent capabilities LLMs exhibit.
Good quants for reading (evaluation speed) are BF16, F16, Q8\_0, and
Q4\_0 (ordered from fastest to slowest). Prompt evaluation is bounded by
flop count, which means perf can be improved through software
engineering alone, e.g. BLAS algorithms, in which case quantization
starts hurting more than it helps, since it competes for CPU resources
and makes it harder for the compiler to parallelize instructions. You
want to ideally use the simplest smallest floating point format that's
natively implemented by your hardware. In most cases, that's BF16 or
FP16. However, llamafile is able to still offer respectable tinyBLAS
speedups for llama.cpp's simplest quants: Q8\_0 and Q4\_0.
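As a rough rule of thumb for the fit-in-RAM question above, weight memory is about parameter count × bits-per-weight ÷ 8; KV cache and runtime overhead come on top. A hedged back-of-the-envelope estimator — the bits-per-weight figures below are approximations, not exact llama.cpp numbers:

```python
def weight_gib(params: float, bits_per_weight: float) -> float:
    """Approximate weight-only memory in GiB (ignores KV cache and overhead)."""
    return params * bits_per_weight / 8 / 2**30

# Phi-3-mini has ~3.8B parameters; bits-per-weight values are rough estimates.
for name, bpw in [("F16", 16), ("Q8_0", 8.5), ("Q4_0", 4.5)]:
    print(f"{name}: ~{weight_gib(3.8e9, bpw):.1f} GiB")
```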
---
## Model Summary
The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support.
The model underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization to improve instruction following and safety.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased robust, state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. It is suited to applications that require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat).
### Tokenizer
Phi-3 Mini-4K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question, using a generic template as follows:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. For a few-shot prompt, the prompt can be formatted as follows:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
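Following the templates above, a small helper can assemble a multi-turn prompt string. This is a sketch of the documented format, not the authoritative implementation — in practice `tokenizer.apply_chat_template(messages)` builds this string for you, and the BOS token is added by the tokenizer:

```python
def build_phi3_prompt(messages):
    """Render a list of {'role', 'content'} messages into the Phi-3 chat format."""
    parts = [f"<|{m['role']}|>\n{m['content']}<|end|>" for m in messages]
    parts.append("<|assistant|>")  # generation continues after this tag
    return "\n".join(parts)

prompt = build_phi3_prompt([
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
])
print(prompt)
```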
### Sample inference code
This code snippet shows how to quickly get started running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-4k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
messages = [
{"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
*Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.*
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned via supervised fine-tuning (SFT) and direct preference optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py).
## Benchmarks
We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models; in particular, we did not optimize the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k–shot examples is listed per-benchmark.
| | Phi-3-Mini-4K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
|---|---|---|---|---|---|---|---|---|---|
| MMLU <br>5-Shot | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 37.5 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 64.0 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 94.6 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.6 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.7 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 65.0 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 59.1 | 59.1 | 54.7 | 47.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP <br> 3-Shot | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"`
* CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
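As a sketch of the fallback described above — the helper function and the capability threshold are assumptions (flash attention generally requires an Ampere-or-newer GPU, i.e. CUDA compute capability 8.0+), not part of the official instructions:

```python
# Hypothetical sketch: pick an attention implementation based on the GPU's
# CUDA compute capability, then load Phi-3-mini accordingly. V100 is (7, 0),
# A100 is (8, 0), H100 is (9, 0).

def select_attn_implementation(major: int, minor: int) -> str:
    """Return the attn_implementation kwarg for the given compute capability."""
    return "flash_attention_2" if (major, minor) >= (8, 0) else "eager"

def load_phi3(model_id: str = "microsoft/Phi-3-mini-4k-instruct"):
    # Imported lazily so the helper above stays usable without torch installed.
    import torch
    from transformers import AutoModelForCausalLM
    major, minor = torch.cuda.get_device_capability()
    return AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        attn_implementation=select_attn_implementation(major, minor),
    )
```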
## Cross Platform Support
The ONNX Runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
| {"language": ["en"], "license": "apache-2.0", "tags": ["llamafile"], "base_model": "microsoft/Phi-3-mini-4k-instruct", "pipeline_tag": "text-generation", "prompt_template": "<|system|>\nYou are a helpful AI assistant.<|end|>\n<|user|>\n{{prompt}}<|end|>\n<|assistant|>"} | jartine/Phi-3-mini-4k-instruct-llamafile | null | [
"llamafile",
"text-generation",
"en",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-26T20:47:56+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5790
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
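For illustration, the hyperparameters above roughly correspond to the following `TrainingArguments` sketch; the `output_dir` and the exact argument names used in the original run are assumptions:

```python
# Hypothetical reconstruction of the training setup from the list above.
hyperparams = dict(
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)

def build_training_args(**overrides):
    # Imported lazily so the dict above stays usable without transformers.
    from transformers import TrainingArguments
    return TrainingArguments(
        output_dir="my_awesome_eli5_clm-model",  # assumed output path
        **{**hyperparams, **overrides},
    )
```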
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6908 | 1.0 | 1273 | 3.5831 |
| 3.5749 | 2.0 | 2546 | 3.5787 |
| 3.5283 | 3.0 | 3819 | 3.5790 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "gpt2", "model-index": [{"name": "my_awesome_eli5_clm-model", "results": []}]} | GauravR12060102/my_awesome_eli5_clm-model | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T20:48:38+00:00 |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/HPAI-BSC/Llama3-Aloe-8B-Alpha
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
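For the multi-part case, the parts are plain byte-wise splits of one file, so they can be rejoined with `cat`; the sketch below demonstrates this on stand-in files (the real part-file naming scheme may differ — check the actual file names in the repo):

```shell
# Sketch: create two stand-in "part" files, then rejoin them with cat.
printf 'part1-' > model.gguf.part1of2
printf 'part2'  > model.gguf.part2of2
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```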
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["biology", "medical"], "datasets": ["argilla/dpo-mix-7k", "nvidia/HelpSteer", "jondurbin/airoboros-3.2", "hkust-nlp/deita-10k-v0", "LDJnr/Capybara", "HPAI-BSC/CareQA", "GBaker/MedQA-USMLE-4-options", "lukaemon/mmlu", "bigbio/pubmed_qa", "openlifescienceai/medmcqa", "bigbio/med_qa"], "base_model": "HPAI-BSC/Llama3-Aloe-8B-Alpha", "quantized_by": "mradermacher"} | mradermacher/Llama3-Aloe-8B-Alpha-GGUF | null | [
"transformers",
"gguf",
"biology",
"medical",
"en",
"dataset:argilla/dpo-mix-7k",
"dataset:nvidia/HelpSteer",
"dataset:jondurbin/airoboros-3.2",
"dataset:hkust-nlp/deita-10k-v0",
"dataset:LDJnr/Capybara",
"dataset:HPAI-BSC/CareQA",
"dataset:GBaker/MedQA-USMLE-4-options",
"dataset:lukaemon/mmlu",
"dataset:bigbio/pubmed_qa",
"dataset:openlifescienceai/medmcqa",
"dataset:bigbio/med_qa",
"base_model:HPAI-BSC/Llama3-Aloe-8B-Alpha",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T20:49:47+00:00 |
token-classification | transformers | {} | Besrour/LILT | null | [
"transformers",
"safetensors",
"lilt",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T20:50:00+00:00 |
|
text-generation | transformers | [Phi 3 128K instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) weights in the Llama format.
All credit goes to @vonjack for their useful [conversion script](https://huggingface.co/vonjack/Phi-3-mini-4k-instruct-LLaMAfied) as well as index files which were used to make this.
Check out their version for the [4k model](https://huggingface.co/vonjack/Phi-3-mini-4k-instruct-LLaMAfied)! | {"language": ["en"], "license": "mit"} | mrfakename/llamaphi-3-128k-instruct | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T20:52:04+00:00 |
null | null | {} | BarathMosby/mistral-finetuned-alpaca | null | [
"tensorboard",
"safetensors",
"region:us"
]
| null | 2024-04-26T20:52:50+00:00 |
|
null | null |
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-70B builds upon the powerful foundation of the [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom, diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.
⚙️ **Release Details**:
- **Model Size**: 70 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-70B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from model:** [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-70B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by Llama-3 instruct version. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch

model_id = "aaditya/OpenBioLLM-Llama3-70B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",  # pipeline's "device" argument does not accept "auto"; device_map does
)

messages = [
    {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
    {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg pill?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,  # greedy decoding; equivalent to the temperature = 0 recommended above
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 8
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
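For illustration, the PEFT hyperparameters above map onto a `peft.LoraConfig` roughly as follows; the `task_type` and the exact construction used in the original run are assumptions:

```python
# Hedged sketch: the QLoRA adapter settings listed above expressed as
# keyword arguments for peft.LoraConfig (values copied from the list).
lora_kwargs = dict(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],
)

def build_lora_config():
    # Imported lazily so the dict above stays usable without peft installed.
    from peft import LoraConfig
    return LoraConfig(task_type="CAUSAL_LM", **lora_kwargs)
```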
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- LM Evaluation Harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **Below results are from the quantized version of OpenBioLLM-70B.**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | {"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-70B-Instruct", "widget": [{"example_title": "OpenBioLLM-70B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. 
The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-70B", "results": []}]} | LoneStriker/OpenBioLLM-Llama3-70B-GGUF | null | [
"gguf",
"llama-3",
"llama",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:llama3",
"region:us"
]
| null | 2024-04-26T20:56:57+00:00 |
text-generation | transformers | {} | mucai/vip-llava-phi-3-mini-3.8B | null | [
"transformers",
"safetensors",
"llava_phi3",
"text-generation",
"conversational",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T20:58:19+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | liquid9212/1rtdb86 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:00:13+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** WeOneGuy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
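The card does not document the prompt template used during fine-tuning; since the repository name suggests Alpaca-style instruction data, a small helper for the standard Alpaca layout is sketched below (the template choice is an assumption, not something the card confirms):

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format a prompt in the classic Alpaca template.

    NOTE: the card does not state which template was used for
    fine-tuning; the standard Alpaca layout is assumed here.
    """
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Translate to French.", "Good morning.")
```

The resulting string would then be tokenized and passed to the model, for example through transformers for the 4-bit checkpoint or llama.cpp for the GGUF export.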
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | WeOneGuy/mistral-7b-alpaca | null | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:00:14+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | quickstep3621/cxo3sk6 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:00:17+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/1pem3u5 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:00:55+00:00 |
text-generation | transformers | {} | GauravR12060102/my_awesome_eli5_clm-model_gpt2 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T21:01:30+00:00 |
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs256_nodpo_only4w_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
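The total train batch size of 256 follows from the per-device batch size, the gradient-accumulation steps, and the device count listed above; a quick sanity check in plain arithmetic (no library assumptions):

```python
def effective_batch_size(per_device: int, grad_accum: int, num_devices: int) -> int:
    """Effective (total) train batch size = per-device batch x accumulation steps x devices."""
    return per_device * grad_accum * num_devices

# Values taken from the hyperparameter list above.
total = effective_batch_size(per_device=8, grad_accum=4, num_devices=8)
print(total)  # 256, matching total_train_batch_size
```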
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.001_4iters_bs256_nodpo_only4w_iter_1", "results": []}]} | ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T21:01:45+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K9ac-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4860
- F1 Score: 0.7868
- Accuracy: 0.7863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
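With a linear scheduler and no warmup listed, the learning rate would decay from 5e-4 at step 0 to 0 at step 10,000; a minimal sketch of that decay (zero warmup is an assumption here, since the card lists a linear scheduler but no warmup ratio):

```python
def linear_lr(step: int, base_lr: float = 5e-4, total_steps: int = 10_000) -> float:
    """Linearly decay the learning rate from base_lr at step 0 to 0 at total_steps.

    Assumes no warmup phase, which the card does not state explicitly.
    """
    step = min(step, total_steps)
    return base_lr * (1 - step / total_steps)

print(linear_lr(0), linear_lr(5_000), linear_lr(10_000))  # 0.0005 0.00025 0.0
```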
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5557 | 1.15 | 200 | 0.5435 | 0.7278 | 0.7272 |
| 0.5083 | 2.3 | 400 | 0.5559 | 0.7251 | 0.7280 |
| 0.4847 | 3.45 | 600 | 0.5117 | 0.7588 | 0.7585 |
| 0.4722 | 4.6 | 800 | 0.4942 | 0.7637 | 0.7632 |
| 0.4661 | 5.75 | 1000 | 0.4936 | 0.7687 | 0.7683 |
| 0.4575 | 6.9 | 1200 | 0.4923 | 0.7702 | 0.7697 |
| 0.4504 | 8.05 | 1400 | 0.5031 | 0.7624 | 0.7621 |
| 0.442 | 9.2 | 1600 | 0.4930 | 0.7698 | 0.7697 |
| 0.4356 | 10.34 | 1800 | 0.4876 | 0.7700 | 0.7697 |
| 0.434 | 11.49 | 2000 | 0.4839 | 0.7726 | 0.7722 |
| 0.4251 | 12.64 | 2200 | 0.4829 | 0.7725 | 0.7726 |
| 0.4233 | 13.79 | 2400 | 0.4823 | 0.7755 | 0.7751 |
| 0.4205 | 14.94 | 2600 | 0.4722 | 0.7765 | 0.7765 |
| 0.4118 | 16.09 | 2800 | 0.4861 | 0.7733 | 0.7729 |
| 0.4088 | 17.24 | 3000 | 0.4833 | 0.7799 | 0.7794 |
| 0.4075 | 18.39 | 3200 | 0.4762 | 0.7748 | 0.7744 |
| 0.4032 | 19.54 | 3400 | 0.4768 | 0.7716 | 0.7711 |
| 0.3952 | 20.69 | 3600 | 0.4839 | 0.7788 | 0.7791 |
| 0.3926 | 21.84 | 3800 | 0.4781 | 0.7741 | 0.7737 |
| 0.391 | 22.99 | 4000 | 0.5085 | 0.7598 | 0.7603 |
| 0.3901 | 24.14 | 4200 | 0.4865 | 0.7719 | 0.7715 |
| 0.3786 | 25.29 | 4400 | 0.5031 | 0.7738 | 0.7733 |
| 0.3817 | 26.44 | 4600 | 0.4994 | 0.7695 | 0.7690 |
| 0.381 | 27.59 | 4800 | 0.4967 | 0.7763 | 0.7758 |
| 0.374 | 28.74 | 5000 | 0.4907 | 0.7727 | 0.7722 |
| 0.3769 | 29.89 | 5200 | 0.5001 | 0.7741 | 0.7737 |
| 0.3672 | 31.03 | 5400 | 0.5043 | 0.7671 | 0.7668 |
| 0.3688 | 32.18 | 5600 | 0.5008 | 0.7745 | 0.7740 |
| 0.3603 | 33.33 | 5800 | 0.5100 | 0.7799 | 0.7794 |
| 0.3643 | 34.48 | 6000 | 0.4972 | 0.7741 | 0.7737 |
| 0.3533 | 35.63 | 6200 | 0.5166 | 0.7758 | 0.7755 |
| 0.3604 | 36.78 | 6400 | 0.5027 | 0.7749 | 0.7744 |
| 0.3553 | 37.93 | 6600 | 0.5220 | 0.7687 | 0.7683 |
| 0.35 | 39.08 | 6800 | 0.5126 | 0.7741 | 0.7737 |
| 0.3499 | 40.23 | 7000 | 0.5196 | 0.7677 | 0.7672 |
| 0.3457 | 41.38 | 7200 | 0.5229 | 0.7684 | 0.7679 |
| 0.3458 | 42.53 | 7400 | 0.5237 | 0.7684 | 0.7679 |
| 0.3435 | 43.68 | 7600 | 0.5272 | 0.7708 | 0.7704 |
| 0.3402 | 44.83 | 7800 | 0.5261 | 0.7709 | 0.7704 |
| 0.3401 | 45.98 | 8000 | 0.5282 | 0.7696 | 0.7693 |
| 0.3397 | 47.13 | 8200 | 0.5327 | 0.7655 | 0.7650 |
| 0.3374 | 48.28 | 8400 | 0.5306 | 0.7691 | 0.7686 |
| 0.3336 | 49.43 | 8600 | 0.5371 | 0.7659 | 0.7654 |
| 0.335 | 50.57 | 8800 | 0.5357 | 0.7687 | 0.7683 |
| 0.3384 | 51.72 | 9000 | 0.5340 | 0.7695 | 0.7690 |
| 0.3308 | 52.87 | 9200 | 0.5367 | 0.7666 | 0.7661 |
| 0.3318 | 54.02 | 9400 | 0.5352 | 0.7677 | 0.7672 |
| 0.3341 | 55.17 | 9600 | 0.5344 | 0.7659 | 0.7654 |
| 0.3304 | 56.32 | 9800 | 0.5349 | 0.7673 | 0.7668 |
| 0.3319 | 57.47 | 10000 | 0.5345 | 0.7673 | 0.7668 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T21:01:47+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | harir/mistral-7b-instruct-v0.1-review-toxicity | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T21:02:42+00:00 |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"} | sherrys/mistralRAFT_50e | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
]
| null | 2024-04-26T21:04:08+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K9ac-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4608
- F1 Score: 0.7911
- Accuracy: 0.7906
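The reported F1 score and accuracy can be reproduced from model predictions in a few lines of plain Python; the sketch below implements binary precision/recall/F1 (the card does not say whether the score is binary or macro-averaged, so binary F1 with positive class 1 is an assumption):

```python
def binary_f1_and_accuracy(y_true, y_pred):
    """Compute binary F1 (positive class = 1) and accuracy from label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    return f1, accuracy

f1, acc = binary_f1_and_accuracy([1, 0, 1, 1], [1, 0, 0, 1])
```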
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5428 | 1.15 | 200 | 0.5428 | 0.7374 | 0.7373 |
| 0.4911 | 2.3 | 400 | 0.5192 | 0.7412 | 0.7424 |
| 0.4694 | 3.45 | 600 | 0.5180 | 0.7467 | 0.7474 |
| 0.4552 | 4.6 | 800 | 0.4854 | 0.7676 | 0.7672 |
| 0.4478 | 5.75 | 1000 | 0.4899 | 0.7629 | 0.7625 |
| 0.4362 | 6.9 | 1200 | 0.4816 | 0.7806 | 0.7801 |
| 0.4286 | 8.05 | 1400 | 0.4899 | 0.7714 | 0.7711 |
| 0.4131 | 9.2 | 1600 | 0.5043 | 0.7677 | 0.7675 |
| 0.4042 | 10.34 | 1800 | 0.5029 | 0.7677 | 0.7675 |
| 0.3993 | 11.49 | 2000 | 0.4941 | 0.7762 | 0.7758 |
| 0.3845 | 12.64 | 2200 | 0.4977 | 0.7681 | 0.7679 |
| 0.3813 | 13.79 | 2400 | 0.5050 | 0.7671 | 0.7672 |
| 0.3701 | 14.94 | 2600 | 0.5067 | 0.7630 | 0.7639 |
| 0.3569 | 16.09 | 2800 | 0.5451 | 0.7525 | 0.7531 |
| 0.3492 | 17.24 | 3000 | 0.5157 | 0.7690 | 0.7686 |
| 0.3422 | 18.39 | 3200 | 0.5235 | 0.7674 | 0.7672 |
| 0.3334 | 19.54 | 3400 | 0.5483 | 0.7607 | 0.7603 |
| 0.3224 | 20.69 | 3600 | 0.5445 | 0.7689 | 0.7686 |
| 0.3144 | 21.84 | 3800 | 0.5174 | 0.7727 | 0.7722 |
| 0.3057 | 22.99 | 4000 | 0.5967 | 0.7518 | 0.7524 |
| 0.304 | 24.14 | 4200 | 0.5790 | 0.7580 | 0.7575 |
| 0.2867 | 25.29 | 4400 | 0.5979 | 0.7588 | 0.7589 |
| 0.2816 | 26.44 | 4600 | 0.5985 | 0.7637 | 0.7632 |
| 0.2795 | 27.59 | 4800 | 0.6115 | 0.7708 | 0.7704 |
| 0.2665 | 28.74 | 5000 | 0.6015 | 0.7566 | 0.7564 |
| 0.2717 | 29.89 | 5200 | 0.5972 | 0.7655 | 0.7650 |
| 0.2551 | 31.03 | 5400 | 0.6186 | 0.7604 | 0.7600 |
| 0.248 | 32.18 | 5600 | 0.6615 | 0.7590 | 0.7585 |
| 0.2432 | 33.33 | 5800 | 0.6447 | 0.7752 | 0.7747 |
| 0.237 | 34.48 | 6000 | 0.6588 | 0.7666 | 0.7661 |
| 0.2305 | 35.63 | 6200 | 0.6836 | 0.7612 | 0.7607 |
| 0.2316 | 36.78 | 6400 | 0.6486 | 0.7651 | 0.7647 |
| 0.2246 | 37.93 | 6600 | 0.6591 | 0.7580 | 0.7575 |
| 0.2174 | 39.08 | 6800 | 0.6870 | 0.7594 | 0.7589 |
| 0.2112 | 40.23 | 7000 | 0.6890 | 0.7590 | 0.7585 |
| 0.2073 | 41.38 | 7200 | 0.7309 | 0.7508 | 0.7503 |
| 0.206 | 42.53 | 7400 | 0.7128 | 0.7547 | 0.7542 |
| 0.2043 | 43.68 | 7600 | 0.7207 | 0.7630 | 0.7625 |
| 0.1981 | 44.83 | 7800 | 0.7241 | 0.7512 | 0.7506 |
| 0.195 | 45.98 | 8000 | 0.7531 | 0.7499 | 0.7496 |
| 0.194 | 47.13 | 8200 | 0.7291 | 0.7522 | 0.7517 |
| 0.1869 | 48.28 | 8400 | 0.7713 | 0.7565 | 0.7560 |
| 0.184 | 49.43 | 8600 | 0.7801 | 0.7565 | 0.7560 |
| 0.186 | 50.57 | 8800 | 0.7840 | 0.7583 | 0.7578 |
| 0.1861 | 51.72 | 9000 | 0.7701 | 0.7576 | 0.7571 |
| 0.1811 | 52.87 | 9200 | 0.7714 | 0.7590 | 0.7585 |
| 0.1827 | 54.02 | 9400 | 0.7581 | 0.7562 | 0.7557 |
| 0.1784 | 55.17 | 9600 | 0.7658 | 0.7558 | 0.7553 |
| 0.1766 | 56.32 | 9800 | 0.7785 | 0.7569 | 0.7564 |
| 0.1769 | 57.47 | 10000 | 0.7781 | 0.7576 | 0.7571 |
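The table above shows validation loss bottoming out early (around step 1200) while training loss keeps falling, which is the usual overfitting pattern; a small sketch that picks the best checkpoint from a hand-copied subset of the (step, validation-loss) pairs:

```python
# (step, validation_loss) pairs copied from a few rows of the table above.
history = [
    (200, 0.5428), (400, 0.5192), (600, 0.5180), (800, 0.4854),
    (1000, 0.4899), (1200, 0.4816), (1400, 0.4899), (10000, 0.7781),
]

# The best checkpoint is the one with the lowest validation loss.
best_step, best_loss = min(history, key=lambda pair: pair[1])
print(best_step, best_loss)  # -> 1200 0.4816
```

With early stopping on validation loss, training could plausibly have been halted long before the full 10000 steps.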
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T21:04:44+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/gko6wa8 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T21:05:36+00:00 |
text-generation | transformers |
# griffin-llama3t-8L-v0.02-fineweb
Pretraining experiment with griffin/recurrent_gemma arch. This one uses the Llama-3 tokenizer.
## Model description
Further training of [pszemraj/griffin-1024-llama3t-8layer-simplewiki-silu](https://huggingface.co/pszemraj/griffin-1024-llama3t-8layer-simplewiki-silu) on the BEE-spoke-data/fineweb-1M_en-med dataset.
It achieves the following results on the evaluation set:
- Loss: 5.6538
- Accuracy: 0.1881
- Num Input Tokens Seen: 766509056
## evals
tl;dr: it's bad and would need more training:
`hf (pretrained=pszemraj/griffin-llama3t-8L-v0.02-fineweb,trust_remote_code=True,dtype=float), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 4`
| Tasks |Version|Filter|n-shot| Metric | Value | | Stderr |
|--------------|------:|------|-----:|----------|----------:|---|---------:|
|winogrande | 1|none | 0|acc | 0.4964|± | 0.0141|
|piqa | 1|none | 0|acc | 0.5332|± | 0.0116|
| | |none | 0|acc_norm | 0.5299|± | 0.0116|
|openbookqa | 1|none | 0|acc | 0.1280|± | 0.0150|
| | |none | 0|acc_norm | 0.2320|± | 0.0189|
|lambada_openai| 1|none | 0|perplexity|638060.0702|± |43608.0044|
| | |none | 0|acc | 0.0000|± | 0.0000|
|boolq | 2|none | 0|acc | 0.3783|± | 0.0085|
|arc_easy | 1|none | 0|acc | 0.2614|± | 0.0090|
| | |none | 0|acc_norm | 0.2744|± | 0.0092|
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- seed: 80085
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-----------------:|
| 6.4019 | 0.0684 | 400 | 6.7690 | 0.1278 | 52428800 |
| 6.0547 | 0.1368 | 800 | 6.4214 | 0.1460 | 104857600 |
| 5.8133 | 0.2052 | 1200 | 6.2566 | 0.1550 | 157286400 |
| 5.7212 | 0.2736 | 1600 | 6.1411 | 0.1620 | 209715200 |
| 5.6175 | 0.3420 | 2000 | 6.0502 | 0.1669 | 262144000 |
| 5.5014 | 0.4104 | 2400 | 5.9827 | 0.1687 | 314572800 |
| 5.4882 | 0.4788 | 2800 | 5.9203 | 0.1731 | 367001600 |
| 5.3972 | 0.5472 | 3200 | 5.8614 | 0.1782 | 419430400 |
| 5.3983 | 0.6156 | 3600 | 5.8340 | 0.1773 | 471859200 |
| 5.3175 | 0.6840 | 4000 | 5.7916 | 0.1814 | 524288000 |
| 5.3014 | 0.7524 | 4400 | 5.7565 | 0.1814 | 576716800 |
| 5.2749 | 0.8208 | 4800 | 5.7303 | 0.1849 | 629145600 |
| 5.2264 | 0.8892 | 5200 | 5.6993 | 0.1850 | 681574400 |
| 5.2107 | 0.9576 | 5600 | 5.6745 | 0.1884 | 734003200 |
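Two of the derived quantities above can be cross-checked in a couple of lines (a minimal sketch; the per-step token figure comes from the first logged row of the results table, and the 2048-token sequence length is an inference from those logs, not something the card states):

```python
# Effective batch size, as reported in the hyperparameters above.
train_batch_size = 2
gradient_accumulation_steps = 32
effective_batch = train_batch_size * gradient_accumulation_steps
print(effective_batch)  # -> 64, matching total_train_batch_size

# Tokens consumed per optimizer step: 52,428,800 input tokens
# were logged after the first 400 steps.
tokens_per_step = 52_428_800 // 400
print(tokens_per_step)  # -> 131072, i.e. 64 sequences x 2048 tokens
```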
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["BEE-spoke-data/fineweb-1M_en-med"], "metrics": ["accuracy"], "base_model": "pszemraj/griffin-1024-llama3t-8layer-simplewiki-silu", "model-index": [{"name": "griffin-1024-llama3t-8layer-simplewiki-silu-fineweb-1M_en-med-vN", "results": []}]} | pszemraj/griffin-llama3t-8L-v0.02-fineweb | null | [
"transformers",
"safetensors",
"recurrent_gemma",
"text-generation",
"generated_from_trainer",
"en",
"dataset:BEE-spoke-data/fineweb-1M_en-med",
"base_model:pszemraj/griffin-1024-llama3t-8layer-simplewiki-silu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:06:07+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Hajas0/hun_emotion_modifier | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:06:58+00:00 |
null | null | {} | MrJordanL/KarlMollisonRVCV2 | null | [
"region:us"
]
| null | 2024-04-26T21:08:36+00:00 |
|
token-classification | transformers | {"license": "apache-2.0"} | Gowrisankarv/bert-ner-custom | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:10:58+00:00 |
|
null | null | {} | Marmotita/Jugador1 | null | [
"region:us"
]
| null | 2024-04-26T21:11:16+00:00 |
|
text-generation | transformers | {} | Manavshah/llama4-dolphin-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T21:11:23+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me3-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5973
- F1 Score: 0.7040
- Accuracy: 0.7038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6332 | 0.87 | 200 | 0.6018 | 0.6704 | 0.6704 |
| 0.5958 | 1.74 | 400 | 0.5919 | 0.6796 | 0.6796 |
| 0.5838 | 2.61 | 600 | 0.5895 | 0.6852 | 0.6853 |
| 0.5763 | 3.48 | 800 | 0.5846 | 0.6876 | 0.6875 |
| 0.5677 | 4.35 | 1000 | 0.5921 | 0.6824 | 0.6826 |
| 0.5652 | 5.22 | 1200 | 0.5783 | 0.6956 | 0.6954 |
| 0.5593 | 6.09 | 1400 | 0.5866 | 0.6958 | 0.6978 |
| 0.5543 | 6.96 | 1600 | 0.5845 | 0.6953 | 0.6954 |
| 0.5483 | 7.83 | 1800 | 0.5852 | 0.6892 | 0.6891 |
| 0.5441 | 8.7 | 2000 | 0.5941 | 0.6931 | 0.6929 |
| 0.5396 | 9.57 | 2200 | 0.5743 | 0.7011 | 0.7011 |
| 0.538 | 10.43 | 2400 | 0.5905 | 0.7028 | 0.7027 |
| 0.5338 | 11.3 | 2600 | 0.5764 | 0.6974 | 0.6981 |
| 0.5368 | 12.17 | 2800 | 0.5788 | 0.6922 | 0.6924 |
| 0.5281 | 13.04 | 3000 | 0.5787 | 0.6911 | 0.6908 |
| 0.5243 | 13.91 | 3200 | 0.5804 | 0.7035 | 0.7035 |
| 0.52 | 14.78 | 3400 | 0.5841 | 0.6971 | 0.6976 |
| 0.5188 | 15.65 | 3600 | 0.5839 | 0.7026 | 0.7024 |
| 0.5117 | 16.52 | 3800 | 0.5833 | 0.6984 | 0.6981 |
| 0.5123 | 17.39 | 4000 | 0.5941 | 0.6931 | 0.6929 |
| 0.5094 | 18.26 | 4200 | 0.6008 | 0.6993 | 0.6995 |
| 0.5067 | 19.13 | 4400 | 0.5939 | 0.6957 | 0.6954 |
| 0.5021 | 20.0 | 4600 | 0.5888 | 0.6989 | 0.7 |
| 0.5014 | 20.87 | 4800 | 0.5931 | 0.7035 | 0.7035 |
| 0.4989 | 21.74 | 5000 | 0.5859 | 0.6997 | 0.6995 |
| 0.4973 | 22.61 | 5200 | 0.5988 | 0.7046 | 0.7043 |
| 0.4939 | 23.48 | 5400 | 0.5977 | 0.7018 | 0.7024 |
| 0.4883 | 24.35 | 5600 | 0.5954 | 0.6993 | 0.7003 |
| 0.4912 | 25.22 | 5800 | 0.5949 | 0.7028 | 0.7027 |
| 0.4846 | 26.09 | 6000 | 0.6026 | 0.7021 | 0.7024 |
| 0.4873 | 26.96 | 6200 | 0.6011 | 0.7015 | 0.7027 |
| 0.4811 | 27.83 | 6400 | 0.6024 | 0.7019 | 0.7024 |
| 0.4842 | 28.7 | 6600 | 0.6047 | 0.7005 | 0.7005 |
| 0.4798 | 29.57 | 6800 | 0.5992 | 0.7019 | 0.7019 |
| 0.4748 | 30.43 | 7000 | 0.6004 | 0.7039 | 0.7038 |
| 0.4818 | 31.3 | 7200 | 0.6029 | 0.7030 | 0.7030 |
| 0.4738 | 32.17 | 7400 | 0.6089 | 0.7035 | 0.7033 |
| 0.4734 | 33.04 | 7600 | 0.6043 | 0.7049 | 0.7046 |
| 0.4724 | 33.91 | 7800 | 0.6026 | 0.7013 | 0.7016 |
| 0.4717 | 34.78 | 8000 | 0.6066 | 0.7054 | 0.7052 |
| 0.4678 | 35.65 | 8200 | 0.6146 | 0.6989 | 0.6986 |
| 0.467 | 36.52 | 8400 | 0.6101 | 0.7035 | 0.7033 |
| 0.4675 | 37.39 | 8600 | 0.6093 | 0.7052 | 0.7049 |
| 0.4609 | 38.26 | 8800 | 0.6144 | 0.7014 | 0.7016 |
| 0.4701 | 39.13 | 9000 | 0.6064 | 0.7044 | 0.7043 |
| 0.4623 | 40.0 | 9200 | 0.6104 | 0.7062 | 0.7060 |
| 0.4589 | 40.87 | 9400 | 0.6133 | 0.7019 | 0.7016 |
| 0.463 | 41.74 | 9600 | 0.6109 | 0.7043 | 0.7041 |
| 0.4634 | 42.61 | 9800 | 0.6103 | 0.7032 | 0.7030 |
| 0.4577 | 43.48 | 10000 | 0.6116 | 0.7040 | 0.7038 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T21:11:47+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me3-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5650
- F1 Score: 0.7048
- Accuracy: 0.7049
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6432 | 0.87 | 200 | 0.6165 | 0.6638 | 0.6641 |
| 0.6108 | 1.74 | 400 | 0.6013 | 0.6774 | 0.6772 |
| 0.6 | 2.61 | 600 | 0.5975 | 0.6740 | 0.675 |
| 0.5904 | 3.48 | 800 | 0.5925 | 0.6789 | 0.6796 |
| 0.5862 | 4.35 | 1000 | 0.5976 | 0.6748 | 0.6772 |
| 0.5837 | 5.22 | 1200 | 0.5871 | 0.6850 | 0.6864 |
| 0.5789 | 6.09 | 1400 | 0.5926 | 0.6843 | 0.6861 |
| 0.5751 | 6.96 | 1600 | 0.5854 | 0.6834 | 0.6832 |
| 0.5716 | 7.83 | 1800 | 0.5896 | 0.6761 | 0.6774 |
| 0.569 | 8.7 | 2000 | 0.5889 | 0.6859 | 0.6856 |
| 0.567 | 9.57 | 2200 | 0.5760 | 0.6869 | 0.6870 |
| 0.5665 | 10.43 | 2400 | 0.5823 | 0.6916 | 0.6913 |
| 0.5622 | 11.3 | 2600 | 0.5757 | 0.6900 | 0.6897 |
| 0.5658 | 12.17 | 2800 | 0.5766 | 0.6880 | 0.6880 |
| 0.5611 | 13.04 | 3000 | 0.5799 | 0.6917 | 0.6916 |
| 0.5585 | 13.91 | 3200 | 0.5750 | 0.6940 | 0.6937 |
| 0.5556 | 14.78 | 3400 | 0.5772 | 0.6939 | 0.6943 |
| 0.5572 | 15.65 | 3600 | 0.5763 | 0.6949 | 0.6946 |
| 0.5507 | 16.52 | 3800 | 0.5802 | 0.6937 | 0.6935 |
| 0.5539 | 17.39 | 4000 | 0.5754 | 0.6975 | 0.6973 |
| 0.5526 | 18.26 | 4200 | 0.5799 | 0.6991 | 0.6989 |
| 0.5506 | 19.13 | 4400 | 0.5792 | 0.6945 | 0.6943 |
| 0.5481 | 20.0 | 4600 | 0.5740 | 0.7030 | 0.7033 |
| 0.5481 | 20.87 | 4800 | 0.5770 | 0.7003 | 0.7003 |
| 0.5488 | 21.74 | 5000 | 0.5765 | 0.6978 | 0.6976 |
| 0.5472 | 22.61 | 5200 | 0.5760 | 0.7022 | 0.7019 |
| 0.5451 | 23.48 | 5400 | 0.5786 | 0.6971 | 0.6986 |
| 0.5438 | 24.35 | 5600 | 0.5770 | 0.6996 | 0.6997 |
| 0.5451 | 25.22 | 5800 | 0.5758 | 0.7026 | 0.7033 |
| 0.5398 | 26.09 | 6000 | 0.5825 | 0.6993 | 0.6997 |
| 0.5445 | 26.96 | 6200 | 0.5784 | 0.7024 | 0.7033 |
| 0.539 | 27.83 | 6400 | 0.5798 | 0.6992 | 0.7 |
| 0.5415 | 28.7 | 6600 | 0.5787 | 0.7003 | 0.7 |
| 0.5385 | 29.57 | 6800 | 0.5747 | 0.7048 | 0.7046 |
| 0.5353 | 30.43 | 7000 | 0.5783 | 0.7036 | 0.7041 |
| 0.5421 | 31.3 | 7200 | 0.5766 | 0.7032 | 0.7033 |
| 0.5388 | 32.17 | 7400 | 0.5753 | 0.7044 | 0.7043 |
| 0.5366 | 33.04 | 7600 | 0.5734 | 0.7035 | 0.7033 |
| 0.5372 | 33.91 | 7800 | 0.5777 | 0.7014 | 0.7016 |
| 0.5361 | 34.78 | 8000 | 0.5769 | 0.7032 | 0.7030 |
| 0.5349 | 35.65 | 8200 | 0.5768 | 0.7032 | 0.7030 |
| 0.5339 | 36.52 | 8400 | 0.5764 | 0.7048 | 0.7046 |
| 0.5352 | 37.39 | 8600 | 0.5759 | 0.7034 | 0.7033 |
| 0.5284 | 38.26 | 8800 | 0.5802 | 0.7026 | 0.7030 |
| 0.5395 | 39.13 | 9000 | 0.5747 | 0.7060 | 0.7063 |
| 0.5328 | 40.0 | 9200 | 0.5767 | 0.7039 | 0.7038 |
| 0.5306 | 40.87 | 9400 | 0.5771 | 0.7043 | 0.7041 |
| 0.5328 | 41.74 | 9600 | 0.5774 | 0.7044 | 0.7043 |
| 0.5359 | 42.61 | 9800 | 0.5761 | 0.7039 | 0.7038 |
| 0.5272 | 43.48 | 10000 | 0.5771 | 0.7048 | 0.7046 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T21:11:47+00:00 |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/AiMavenAi/Herd-1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
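A rough sanity check on the table above: take the f16 entry (16.2 GB at a stated 16 bpw) as implying roughly 8.1B parameters, then back out approximate bits-per-weight for the other quants from their file sizes (this sketch ignores GB-vs-GiB and metadata overhead, so the numbers are only indicative):

```python
# Parameter count implied by the f16 file: size_gb * 8 bits/byte / 16 bits/weight.
f16_gb = 16.2
params_billion = f16_gb * 8 / 16  # ~8.1B parameters

def bpw(size_gb: float) -> float:
    """Approximate bits per weight for a quant of the given file size."""
    return size_gb * 8 / params_billion

print(round(bpw(5.0), 2))  # Q4_K_M -> 4.94
print(round(bpw(8.6), 2))  # Q8_0   -> 8.49
```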
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "base_model": "AiMavenAi/Herd-1", "quantized_by": "mradermacher"} | mradermacher/Herd-1-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:AiMavenAi/Herd-1",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:14:18+00:00 |
null | null | quantized_by: KnightCodin
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- axolotl
---
## Exllama v2 Quantizations of winglian/Llama-3-8b-64k-PoSE
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.19">turboderp's ExLlamaV2 v0.0.19</a> for quantization.
<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>
Each branch contains an individual bits per weight, with the main branch containing only the measurement.json for further conversions.
## Llama 3 8B 64K https://huggingface.co/winglian/Llama-3-8b-64k-PoSE/tree/main
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<img src="https://huggingface.co/winglian/Llama-3-8b-64k-PoSE/resolve/main/output.png" />
This model uses [PoSE](https://huggingface.co/papers/2309.10400) to extend Llama's context length from 8k to 64k @ rope_theta: 500000.0.
We used PoSE with continued pretraining on 300M tokens from the RedPajama V1 dataset using data between 6k-8k tokens.
We have further set rope_theta to 2M after continued pre-training to potentially further extend the context past 64k.
This was trained on a subset of the RedPajama v1 dataset with text between 6k-8k context. We trained a rank stabilized LoRA of rank 256. [WandB](https://wandb.ai/oaaic/llama-3-64k/runs/tkcyjt37)
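For intuition on why raising rope_theta helps: it is the RoPE frequency base, and a larger base stretches the longest rotary wavelength, so positions beyond the original 8k window still receive distinguishable encodings. A minimal sketch (head_dim=128 is an assumption, the usual value for Llama 3 8B — check the model config):

```python
import math

# Inverse frequencies of rotary position embeddings for a given base.
def rope_inv_freq(theta: float, head_dim: int = 128):
    return [theta ** (-2.0 * i / head_dim) for i in range(head_dim // 2)]

base = rope_inv_freq(500_000.0)      # value used for the 64k extension
longer = rope_inv_freq(2_000_000.0)  # rope_theta set after continued pre-training

# A larger theta lowers the smallest frequency, i.e. lengthens the longest
# wavelength, giving the positional signal more room before it wraps.
wavelength = lambda f: 2 * math.pi / f
print(wavelength(base[-1]) < wavelength(longer[-1]))  # True
```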
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original `llama3` codebase.
### Use with transformers
See the snippet below for usage with Transformers:
```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-8B"

pipeline = transformers.pipeline(
    "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
)
pipeline("Hey how are you doing today?")
```
### Use with `llama3`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3).
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B --include "original/*" --local-dir Meta-Llama-3-8B
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
<td><strong>Carbon Emitted(tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
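As a rough sanity check on the table, energy is just GPU-hours times TDP, and the implied carbon intensity comes out nearly identical for both models — suggesting the figures are internally consistent (a back-of-envelope sketch only; it ignores the power-usage-efficiency adjustment mentioned above):

```python
# Implied carbon intensity (tCO2eq per MWh) from the table's figures.
def implied_intensity(gpu_hours: float, tdp_watts: float, tco2eq: float) -> float:
    mwh = gpu_hours * tdp_watts / 1e6  # watt-hours -> megawatt-hours
    return tco2eq / mwh

i8b = implied_intensity(1.3e6, 700, 390)    # Llama 3 8B row
i70b = implied_intensity(6.4e6, 700, 1900)  # Llama 3 70B row
print(round(i8b, 2), round(i70b, 2))  # -> 0.43 0.42
```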
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models, respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
#### Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste 
Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia 
Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
| {"language": ["en"], "license": "cc-by-nc-4.0"} | Knightcodin/Llama-3-8b-64k-PoSE-exl2 | null | [
"en",
"arxiv:2309.10400",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2024-04-26T21:15:22+00:00 |
null | null | {"license": "llama3"} | mashjhjd/ensayo | null | [
"license:llama3",
"region:us"
]
| null | 2024-04-26T21:16:17+00:00 |
|
null | null | {"license": "mit"} | Rusvo/s25 | null | [
"license:mit",
"region:us"
]
| null | 2024-04-26T21:17:08+00:00 |
|
text-generation | mlx |
# mlx-community/Meta-Llama-3-8B-Instruct
This model was converted to MLX format from [`meta-llama/Meta-Llama-3-8B-Instruct`]() using mlx-lm version **0.12.0**.
Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
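Since this is an Instruct model, prompts should follow the Llama 3 chat format; the tokenizer returned by `load` wraps the Hugging Face tokenizer, so its `apply_chat_template` should handle this for you. For illustration, a minimal sketch of the underlying token layout:

```python
# Manual builder for the Llama 3 chat format (normally done by
# tokenizer.apply_chat_template -- shown here only to make the layout explicit).
def llama3_prompt(messages):
    out = "<|begin_of_text|>"
    for m in messages:
        out += f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
    # Open the assistant turn so the model generates the reply.
    out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out

prompt = llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "hello"},
])
# response = generate(model, tokenizer, prompt=prompt, verbose=True)
```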
| {"language": ["en"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "mlx"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. 
You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. 
Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). 
Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. 
This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. 
Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. 
Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit", "widget": [{"example_title": "Hello", "messages": [{"role": "user", "content": "Hey my name is Julien! How are you?"}]}, {"example_title": "Winter holidays", "messages": [{"role": "system", "content": "You are a helpful and honest assistant. 
Please, respond concisely and truthfully."}, {"role": "user", "content": "Can you recommend a good destination for Winter holidays?"}]}, {"example_title": "Programming assistant", "messages": [{"role": "system", "content": "You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully."}, {"role": "user", "content": "Write a function that computes the nth fibonacci number."}]}], "inference": {"parameters": {"max_new_tokens": 300, "stop": ["<|end_of_text|>", "<|eot_id|>"]}}} | mlx-community/Meta-Llama-3-8B-Instruct | null | [
"mlx",
"safetensors",
"llama",
"facebook",
"meta",
"pytorch",
"llama-3",
"text-generation",
"conversational",
"en",
"license:other",
"region:us"
]
| null | 2024-04-26T21:19:33+00:00 |
token-classification | transformers | {"license": "apache-2.0"} | tandrievich/LaBSE-finetuned | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:21:43+00:00 |
|
text-generation | transformers |
# Uploaded model
- **Developed by:** jjohnsondev
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"} | jjohnsondev/Mistral-7B-Summarizer-QLoRA | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:24:52+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]} | basakerdogan/Cyber-Jarvis-4Bit | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us",
"has_space"
]
| null | 2024-04-26T21:24:59+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-ar
This model is a fine-tuned version of [tner/xlm-roberta-base-panx-dataset-ar](https://huggingface.co/tner/xlm-roberta-base-panx-dataset-ar) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1977
- F1: 0.8803
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2179 | 1.0 | 188 | 0.1977 | 0.8803 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "tner/xlm-roberta-base-panx-dataset-ar", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-ar", "results": []}]} | Awayes/xlm-roberta-base-finetuned-panx-ar | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:tner/xlm-roberta-base-panx-dataset-ar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:24:59+00:00 |
text-generation | transformers | {} | Weni/WeniGPT-Agents-Mistral-1.0.19-SFT-1.0.33-DPO-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
]
| null | 2024-04-26T21:25:10+00:00 |
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs128_nodpo_only4w_userresponse_iter_2
This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1](https://huggingface.co/ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
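The two batch-size totals above follow directly from the per-device settings: the train total multiplies in the gradient-accumulation steps, while evaluation does not accumulate. A quick arithmetic check of the listed numbers (an illustration only, not training code):

```python
# Derive the effective batch sizes from the per-device settings above.
per_device_train_batch = 8
per_device_eval_batch = 8
num_devices = 8
grad_accum_steps = 2

total_train_batch = per_device_train_batch * num_devices * grad_accum_steps
total_eval_batch = per_device_eval_batch * num_devices  # no accumulation at eval

print(total_train_batch, total_eval_batch)  # 128 64, matching the totals listed
```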
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1", "model-index": [{"name": "0.001_4iters_bs128_nodpo_only4w_userresponse_iter_2", "results": []}]} | ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_2 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T21:25:43+00:00 |
null | null | {"license": "llama3"} | Nicknor/Llama3 | null | [
"license:llama3",
"region:us"
]
| null | 2024-04-26T21:25:52+00:00 |
|
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
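For the multi-part quants further down in the table (Q6_K and Q8_0), the parts only need to be concatenated in order to restore the single GGUF file. A minimal sketch, demonstrated here with small placeholder part files standing in for the real `.part1of2`/`.part2of2` downloads:

```python
# Rebuild a split GGUF file by concatenating its parts in order.
# The two writes below create placeholder parts for demonstration;
# substitute the actual .part1of2/.part2of2 files from the table.
from pathlib import Path

Path("model.Q6_K.gguf.part1of2").write_bytes(b"first-half-")
Path("model.Q6_K.gguf.part2of2").write_bytes(b"second-half")

parts = ["model.Q6_K.gguf.part1of2", "model.Q6_K.gguf.part2of2"]
with open("model.Q6_K.gguf", "wb") as out:
    for part in parts:  # order matters: part1 first, then part2
        out.write(Path(part).read_bytes())

print(Path("model.Q6_K.gguf").read_bytes())  # b'first-half-second-half'
```

On a Unix shell the same result comes from `cat file.part1of2 file.part2of2 > file.gguf`.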
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q5_K_M.gguf) | Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "aaditya/OpenBioLLM-Llama3-70B", "quantized_by": "mradermacher"} | mradermacher/OpenBioLLM-Llama3-70B-GGUF | null | [
"transformers",
"gguf",
"llama-3",
"llama",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"base_model:aaditya/OpenBioLLM-Llama3-70B",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:26:36+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | MrezaPRZ/CodeLLama_SFT_FULL | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T21:26:40+00:00 |
null | null | {} | Jojosu/Heia | null | [
"region:us"
]
| null | 2024-04-26T21:26:41+00:00 |
|
null | null | {} | seagater/code-llama-7b-text-to-sql | null | [
"region:us"
]
| null | 2024-04-26T21:27:39+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | grahamaco/Mixtral-8x7B-Instruct-v0.1-touch-rugby-rules-adapters | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:27:45+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NHS-BiomedNLP-BiomedBERT-hypop-512
This model is a fine-tuned version of [microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3839
- Accuracy: 0.8269
- Precision: 0.8228
- Recall: 0.8237
- F1: 0.8232
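As a sanity check, the reported F1 is consistent with the precision and recall above via the harmonic-mean relation. (This holds exactly only for micro-averaged scores; for macro/weighted averages it is approximate, though here the numbers agree to four decimals.)

```python
# Recompute F1 from the evaluation precision and recall reported above.
precision = 0.8228
recall = 0.8237
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.8232, matching the reported F1
```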
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.124 | 1.0 | 397 | 0.4029 | 0.8177 | 0.8146 | 0.8129 | 0.8137 |
| 0.0594 | 2.0 | 794 | 0.4561 | 0.8246 | 0.8245 | 0.8161 | 0.8192 |
| 0.1105 | 3.0 | 1191 | 0.5390 | 0.8120 | 0.8119 | 0.8028 | 0.8059 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall", "f1"], "base_model": "microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract", "model-index": [{"name": "NHS-BiomedNLP-BiomedBERT-hypop-512", "results": []}]} | NIHNCATS/NHS-BiomedNLP-BiomedBERT-hypop-512 | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:30:20+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me3-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6169
- F1 Score: 0.7002
- Accuracy: 0.7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6239 | 0.87 | 200 | 0.5985 | 0.6788 | 0.6793 |
| 0.5862 | 1.74 | 400 | 0.5935 | 0.6854 | 0.6861 |
| 0.5743 | 2.61 | 600 | 0.5918 | 0.6874 | 0.6875 |
| 0.5648 | 3.48 | 800 | 0.5885 | 0.6931 | 0.6929 |
| 0.5541 | 4.35 | 1000 | 0.6055 | 0.6882 | 0.6880 |
| 0.5479 | 5.22 | 1200 | 0.5793 | 0.6969 | 0.6967 |
| 0.541 | 6.09 | 1400 | 0.5860 | 0.6984 | 0.6992 |
| 0.5342 | 6.96 | 1600 | 0.5830 | 0.7035 | 0.7033 |
| 0.5227 | 7.83 | 1800 | 0.5826 | 0.6962 | 0.6959 |
| 0.5144 | 8.7 | 2000 | 0.5969 | 0.7025 | 0.7022 |
| 0.5064 | 9.57 | 2200 | 0.5766 | 0.7030 | 0.7033 |
| 0.5015 | 10.43 | 2400 | 0.6176 | 0.7093 | 0.7092 |
| 0.4935 | 11.3 | 2600 | 0.5811 | 0.7026 | 0.7035 |
| 0.4908 | 12.17 | 2800 | 0.6091 | 0.6883 | 0.6905 |
| 0.4811 | 13.04 | 3000 | 0.5796 | 0.7064 | 0.7063 |
| 0.4709 | 13.91 | 3200 | 0.5845 | 0.7144 | 0.7141 |
| 0.4587 | 14.78 | 3400 | 0.6026 | 0.7110 | 0.7109 |
| 0.4555 | 15.65 | 3600 | 0.6061 | 0.7163 | 0.7163 |
| 0.4414 | 16.52 | 3800 | 0.6199 | 0.7123 | 0.7122 |
| 0.4388 | 17.39 | 4000 | 0.6460 | 0.7095 | 0.7092 |
| 0.4313 | 18.26 | 4200 | 0.6381 | 0.7134 | 0.7133 |
| 0.4264 | 19.13 | 4400 | 0.6426 | 0.7141 | 0.7139 |
| 0.4191 | 20.0 | 4600 | 0.6407 | 0.7067 | 0.7068 |
| 0.4071 | 20.87 | 4800 | 0.6623 | 0.7118 | 0.7117 |
| 0.4026 | 21.74 | 5000 | 0.6646 | 0.7055 | 0.7054 |
| 0.3947 | 22.61 | 5200 | 0.6809 | 0.7034 | 0.7033 |
| 0.3927 | 23.48 | 5400 | 0.6699 | 0.7071 | 0.7068 |
| 0.3836 | 24.35 | 5600 | 0.6672 | 0.7075 | 0.7079 |
| 0.3777 | 25.22 | 5800 | 0.7169 | 0.7033 | 0.7033 |
| 0.3736 | 26.09 | 6000 | 0.7113 | 0.7071 | 0.7068 |
| 0.3693 | 26.96 | 6200 | 0.7191 | 0.7098 | 0.7095 |
| 0.3574 | 27.83 | 6400 | 0.7157 | 0.7106 | 0.7103 |
| 0.358 | 28.7 | 6600 | 0.7305 | 0.6995 | 0.6995 |
| 0.354 | 29.57 | 6800 | 0.7093 | 0.7080 | 0.7079 |
| 0.3459 | 30.43 | 7000 | 0.7316 | 0.7030 | 0.7027 |
| 0.3477 | 31.3 | 7200 | 0.7457 | 0.7046 | 0.7043 |
| 0.3398 | 32.17 | 7400 | 0.7478 | 0.7072 | 0.7071 |
| 0.3402 | 33.04 | 7600 | 0.7307 | 0.7052 | 0.7049 |
| 0.3345 | 33.91 | 7800 | 0.7317 | 0.7090 | 0.7090 |
| 0.3319 | 34.78 | 8000 | 0.7630 | 0.7046 | 0.7043 |
| 0.3208 | 35.65 | 8200 | 0.7667 | 0.7060 | 0.7057 |
| 0.3236 | 36.52 | 8400 | 0.7576 | 0.7063 | 0.7060 |
| 0.3226 | 37.39 | 8600 | 0.7906 | 0.7081 | 0.7079 |
| 0.3161 | 38.26 | 8800 | 0.7812 | 0.7079 | 0.7076 |
| 0.3236 | 39.13 | 9000 | 0.7644 | 0.7073 | 0.7071 |
| 0.3129 | 40.0 | 9200 | 0.7809 | 0.7065 | 0.7063 |
| 0.3078 | 40.87 | 9400 | 0.7810 | 0.7092 | 0.7090 |
| 0.3135 | 41.74 | 9600 | 0.7768 | 0.7106 | 0.7103 |
| 0.3145 | 42.61 | 9800 | 0.7797 | 0.7087 | 0.7084 |
| 0.307 | 43.48 | 10000 | 0.7809 | 0.7087 | 0.7084 |
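With a fixed `training_steps` budget like the run above, the final checkpoint is not necessarily the best one — validation loss bottoms out around step 2200 and climbs afterwards. A generic way to pick the best checkpoint from a (step, validation loss) history, using a few rows taken from the table above:

```python
# A few (step, validation_loss) rows from the results table above
history = [(2200, 0.5766), (2400, 0.6176), (3000, 0.5796), (10000, 0.7809)]

# Select the checkpoint with the lowest validation loss
best_step, best_loss = min(history, key=lambda pair: pair[1])
print(best_step, best_loss)  # → 2200 0.5766
```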
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T21:31:53+00:00 |
null | null | {} | H2Z1024/distilgpt2-finetuned-wikitext2 | null | [
"region:us"
]
| null | 2024-04-26T21:33:43+00:00 |
|
null | transformers |
# xhluca/Llama-3-8B-Web-Q4_K_M-GGUF
This model was converted to GGUF format from [`McGill-NLP/Llama-3-8B-Web`](https://huggingface.co/McGill-NLP/Llama-3-8B-Web) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/McGill-NLP/Llama-3-8B-Web) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo xhluca/Llama-3-8B-Web-Q4_K_M-GGUF --model llama-3-8b-web.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo xhluca/Llama-3-8B-Web-Q4_K_M-GGUF --model llama-3-8b-web.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-8b-web.Q4_K_M.gguf -n 128
```
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["agents", "agent", "llm", "llama", "llama-cpp", "gguf-my-repo"], "datasets": ["McGill-NLP/WebLINX"]} | xhluca/Llama-3-8B-Web-Q4_K_M-GGUF | null | [
"transformers",
"gguf",
"agents",
"agent",
"llm",
"llama",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:McGill-NLP/WebLINX",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:33:44+00:00 |
text-generation | transformers |
# c4ai-command-r-plus - llamafile
This repository contains executable weights (which we call
[llamafiles](https://github.com/Mozilla-Ocho/llamafile)) that run on
Linux, MacOS, Windows, FreeBSD, OpenBSD, and NetBSD for AMD64 and ARM64.
- Model creator: [CohereForAI](https://huggingface.co/CohereForAI)
- Original model: [CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus)
## Quickstart
You can run the following commands to download and execute the model.
```
wget https://huggingface.co/jartine/c4ai-command-r-plus-llamafile/resolve/main/c4ai-command-r-plus.Q2_K.llamafile
chmod +x c4ai-command-r-plus.Q2_K.llamafile
./c4ai-command-r-plus.Q2_K.llamafile --help # view manual
./c4ai-command-r-plus.Q2_K.llamafile # launch web gui + oai api
./c4ai-command-r-plus.Q2_K.llamafile -p ... # cli interface (scriptable)
```
Alternatively, you may download an official `llamafile` executable from
Mozilla Ocho on GitHub, in which case you can use these llamafiles as
simple weights data files.
```
llamafile -m ./c4ai-command-r-plus.Q2_K.llamafile ...
```
For further information, please see the [llamafile
README](https://github.com/mozilla-ocho/llamafile/).
Having **trouble?** See the ["Gotchas"
section](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas)
of the README.
## About Upload Limits
Files which exceed the Hugging Face 50GB upload limit have a .cat𝑋
extension. You need to use the `cat` command locally to turn them back
into a single file, using the same order.
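For example, if a quant were split into hypothetical `.cat0`/`.cat1` parts, reassembly is a plain `cat` over the parts in numeric order (demonstrated here on dummy files rather than real weights):

```shell
# Create two dummy parts, then reassemble them in order with cat
printf 'GGUF-part-one-' > model.llamafile.cat0
printf 'part-two' > model.llamafile.cat1
cat model.llamafile.cat0 model.llamafile.cat1 > model.llamafile
cat model.llamafile   # → GGUF-part-one-part-two
```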
## Prompting
Prompt template:
```
<BOS_TOKEN>
<|START_OF_TURN_TOKEN|>
<|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|>
<|START_OF_TURN_TOKEN|>
<|CHATBOT_TOKEN|>
```
## About llamafile
llamafile is a new format introduced by Mozilla Ocho on Nov 20th 2023.
It uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp
binaries that run on the stock installs of six OSes for both ARM64 and
AMD64.
In addition to being executables, llamafiles are also zip archives. Each
llamafile contains a GGUF file, which you can extract using the `unzip`
command. If you want to change or add files to your llamafiles, then the
`zipalign` command (distributed on the llamafile github) should be used
instead of the traditional `zip` command.
## License
The Command-R-Plus license requires:
- You can't use these weights for commercial purposes
- You have to give Cohere credit if you share or fine tune it
- You can't use it for purposes they consider unacceptable, such as
spam, misinformation, etc. The license says they can change the
definition of acceptable use at will.
- The CC-BY-NC 4.0 stipulates no downstream restrictions, so you can't
tack on your own list of unacceptable uses too if you create and
distribute a fine-tuned version.
## About Quantization Formats (General Advice)
Your choice of quantization format depends on three things:
1. Will it fit in RAM or VRAM?
2. Is your use case reading (e.g. summarization) or writing (e.g. chatbot)?
3. llamafiles bigger than 4.30 GB are hard to run on Windows (see [gotchas](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas))
Good quants for writing (prediction speed) are Q5\_K\_M and Q4\_0. Text
generation is bounded by memory speed, so smaller quants help, but they
cause the LLM to hallucinate more. However, that doesn't mean they can't
think correctly. A highly degraded quant like `Q2_K` may not make a
great encyclopedia, but it's still capable of logical reasoning and of
the emergent capabilities LLMs exhibit.
Good quants for reading (evaluation speed) are BF16, F16, Q8\_0, and
Q4\_0 (ordered from fastest to slowest). Prompt evaluation is bounded by
flop count, which means perf can be improved through software
engineering alone, e.g. BLAS algorithms, in which case quantization
starts hurting more than it helps, since it competes for CPU resources
and makes it harder for the compiler to parallelize instructions. You
want to ideally use the simplest smallest floating point format that's
natively implemented by your hardware. In most cases, that's BF16 or
FP16. However, llamafile is able to still offer respectable tinyBLAS
speedups for llama.cpp's simplest quants: Q8\_0 and Q4\_0.
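For question 1 above, a rough rule of thumb is weight memory ≈ parameter count × bits-per-weight ÷ 8, ignoring the KV cache and runtime overhead. A sketch for a 104B-parameter model — the bits-per-weight figures below are approximate rule-of-thumb values, not exact on-disk format sizes:

```python
PARAMS = 104e9  # Command R+ parameter count

# Approximate bits per weight per format (rule-of-thumb, not exact sizes)
bits_per_weight = {"F16": 16, "Q8_0": 8.5, "Q5_K_M": 5.5, "Q4_0": 4.5, "Q2_K": 2.6}

for name, bits in bits_per_weight.items():
    gib = PARAMS * bits / 8 / 2**30  # bytes of weights, expressed in GiB
    print(f"{name}: ~{gib:.0f} GiB")
```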
---
# Model Card for C4AI Command R+
🚨 **This model is the non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit)**.
## Model Summary
C4AI Command R+ is an open weights research release of a 104 billion parameter model with highly advanced capabilities, including Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. The tool use in this model generation enables multi-step tool use, which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ is a multilingual model evaluated in 10 languages for performance: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Arabic, and Simplified Chinese. Command R+ is optimized for a variety of use cases including reasoning, summarization, and question answering.
C4AI Command R+ is part of a family of open weight releases from Cohere For AI and Cohere. Our smaller companion model is [C4AI Command R](https://huggingface.co/CohereForAI/c4ai-command-r-v01)
Developed by: [Cohere](https://cohere.com/) and [Cohere For AI](https://cohere.for.ai)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: c4ai-command-r-plus
- Model Size: 104 billion parameters
- Context length: 128K
**Try C4AI Command R+**
You can try out C4AI Command R+ before downloading the weights in our hosted [Hugging Face Space](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus).
**Usage**
Please install `transformers` from the source repository that includes the necessary changes for this model.
```python
# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
**Quantized model through bitsandbytes, 8-bit precision**
```python
# pip install 'git+https://github.com/huggingface/transformers.git' bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
**Quantized model through bitsandbytes, 4-bit precision**
This model is the non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit).
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety.
**Languages covered**: The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic.
Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian.
**Context length**: Command R+ supports a context length of 128K.
## Evaluations
Command R+ has been submitted to the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). We include the results below, along with a direct comparison to the strongest state-of-the-art open weights models currently available on Hugging Face. We note that these results are only useful to compare when evaluations are implemented for all models in a [standardized way](https://github.com/EleutherAI/lm-evaluation-harness) using publicly available code, and hence shouldn't be used for comparison outside of models submitted to the leaderboard or compared to self-reported numbers which can't be replicated in the same way.
| Model | Average | Arc (Challenge) | Hella Swag | MMLU | Truthful QA | Winogrande | GSM8k |
|:--------------------------------|----------:|------------------:|-------------:|-------:|--------------:|-------------:|--------:|
| **CohereForAI/c4ai-command-r-plus** | 74.6 | 70.99 | 88.6 | 75.7 | 56.3 | 85.4 | 70.7 |
| [DBRX Instruct](https://huggingface.co/databricks/dbrx-instruct) | 74.5 | 68.9 | 89 | 73.7 | 66.9 | 81.8 | 66.9 |
| [Mixtral 8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.7 | 70.1 | 87.6 | 71.4 | 65 | 81.1 | 61.1 |
| [Mixtral 8x7B Chat](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.6 | 70.2 | 87.6 | 71.2 | 64.6 | 81.4 | 60.7 |
| [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01) | 68.5 | 65.5 | 87 | 68.2 | 52.3 | 81.5 | 56.6 |
| [Llama 2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf) | 67.9 | 67.3 | 87.3 | 69.8 | 44.9 | 83.7 | 54.1 |
| [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 65.3 | 65.4 | 84.2 | 74.9 | 55.4 | 80.1 | 31.9 |
| [Gemma-7B](https://huggingface.co/google/gemma-7b) | 63.8 | 61.1 | 82.2 | 64.6 | 44.8 | 79 | 50.9 |
| [LLama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | 62.4 | 64.6 | 85.9 | 63.9 | 52.8 | 80.5 | 26.7 |
| [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 61 | 60 | 83.3 | 64.2 | 42.2 | 78.4 | 37.8 |
We include these metrics here because they are frequently requested, but note that these metrics do not capture RAG, multilingual, tooling performance or the evaluation of open ended generations which we believe Command R+ to be state-of-the-art at. For evaluations of RAG, multilingual and tooling read more [here](https://txt.cohere.com/command-r-plus-microsoft-azure/). For evaluation of open ended generation, Command R+ is currently being evaluated on the [chatbot arena](https://chat.lmsys.org/).
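As a sanity check, the leaderboard's Average column is simply the arithmetic mean of the six benchmark scores; the Command R+ row works out as expected:

```python
# Command R+ row: Arc (Challenge), HellaSwag, MMLU, TruthfulQA, Winogrande, GSM8k
scores = [70.99, 88.6, 75.7, 56.3, 85.4, 70.7]
average = sum(scores) / len(scores)
print(round(average, 1))  # → 74.6, matching the Average column
```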
### Tool use & multihop capabilities:
Command R+ has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation.
Command R+’s tool use functionality takes a conversation as input (with an optional user-supplied system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ may use one of its supplied tools more than once.
The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions.
We recommend including the `directly_answer` tool, but it can be removed or renamed if required.
Comprehensive documentation for working with command R+'s tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r).
The code snippet below shows a minimal working example on how to render a prompt.
<details>
<summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary>
```python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
{"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# Define tools available for the model to use:
tools = [
{
"name": "internet_search",
"description": "Returns a list of relevant document snippets for a textual query retrieved from the internet",
"parameter_definitions": {
"query": {
"description": "Query to search the internet with",
"type": 'str',
"required": True
}
}
},
{
'name': "directly_answer",
"description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history",
'parameter_definitions': {}
}
]
# render the tool use prompt as a string:
tool_use_prompt = tokenizer.apply_tool_use_template(
conversation,
tools=tools,
tokenize=False,
add_generation_prompt=True,
)
print(tool_use_prompt)
```
</details>
<details>
<summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary>
````
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble
The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral.
# System Preamble
## Basic Rules
You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions.
# User Preamble
## Task and Context
You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.
## Style Guide
Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.
## Available Tools
Here is a list of tools that you have available to you:
```python
def internet_search(query: str) -> List[Dict]:
"""Returns a list of relevant document snippets for a textual query retrieved from the internet
Args:
query (str): Query to search the internet with
"""
pass
```
```python
def directly_answer() -> List[Dict]:
"""Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history
"""
pass
```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example:
```json
[
{
"tool_name": title of the tool in the specification,
"parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters
}
]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
````
</details>
<details>
<summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary>
````
Action: ```json
[
{
"tool_name": "internet_search",
"parameters": {
"query": "biggest penguin in the world"
}
}
]
```
````
</details>
### Grounded Generation and RAG Capabilities:
Command R+ has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation.
Command R+’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured.
By default, Command R+ will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation.
The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens.
Comprehensive documentation for working with Command R+'s grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r).
The code snippet below shows a minimal working example on how to render a prompt.
<details>
<summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary>
````python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
{"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# define documents to ground on:
documents = [
{ "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." },
{ "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."}
]
# render the tool use prompt as a string:
grounded_generation_prompt = tokenizer.apply_grounded_generation_template(
conversation,
documents=documents,
citation_mode="accurate", # or "fast"
tokenize=False,
add_generation_prompt=True,
)
print(grounded_generation_prompt)
````
</details>
<details>
<summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary>
````<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble
The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral.
# System Preamble
## Basic Rules
You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions.
# User Preamble
## Task and Context
You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.
## Style Guide
Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results>
Document: 0
title: Tall penguins
text: Emperor penguins are the tallest growing up to 122 cm in height.
Document: 1
title: Penguin habitats
text: Emperor penguins only live in Antarctica.
</results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line.
Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'.
Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'.
Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup.
Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
````
</details>
<details>
<summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary>
````
Relevant Documents: 0,1
Cited Documents: 0,1
Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres.
Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0>
````
</details>
### Code Capabilities:
Command R+ has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions.
### Model Card Contact
For errors or additional questions about details in this model card, contact [[email protected]](mailto:[email protected]).
### Terms of Use:
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try Chat:
You can try Command R+ chat in the playground [here](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus).
| {"language": ["en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar"], "license": "other", "library_name": "transformers", "tags": ["llamafile"], "base_model": "CohereForAI/c4ai-command-r-plus", "model_creator": "CohereForAI", "quantized_by": "jartine", "license_link": "LICENSE", "pipeline_tag": "text-generation", "prompt_template": "<BOS_TOKEN>\n<|START_OF_TURN_TOKEN|>\n<|USER_TOKEN|>{{prompt}}<|END_OF_TURN_TOKEN|>\n<|START_OF_TURN_TOKEN|>\n<|CHATBOT_TOKEN|>\n"} | jartine/c4ai-command-r-plus-llamafile | null | [
"transformers",
"llamafile",
"text-generation",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"base_model:CohereForAI/c4ai-command-r-plus",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:34:19+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2585
- F1 Score: 0.9143
- Accuracy: 0.9144
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
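The linear scheduler listed above decays the learning rate from its peak to zero over the 10,000 training steps. A minimal sketch of that schedule (the `warmup_steps` parameter is an illustrative addition; this run used none):

```python
def linear_lr(step, total_steps=10000, base_lr=5e-4, warmup_steps=0):
    # Linear warmup to base_lr, then linear decay to zero at total_steps.
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

print(linear_lr(0))      # 0.0005 at the start (no warmup)
print(linear_lr(5000))   # 0.00025 halfway through
print(linear_lr(10000))  # 0.0 at the end
```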
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.309 | 2.17 | 200 | 0.2690 | 0.8966 | 0.8966 |
| 0.2557 | 4.35 | 400 | 0.2645 | 0.9009 | 0.9008 |
| 0.2401 | 6.52 | 600 | 0.2567 | 0.9006 | 0.9008 |
| 0.2308 | 8.7 | 800 | 0.2602 | 0.9017 | 0.9014 |
| 0.2178 | 10.87 | 1000 | 0.2584 | 0.9025 | 0.9028 |
| 0.2115 | 13.04 | 1200 | 0.2571 | 0.9068 | 0.9069 |
| 0.2007 | 15.22 | 1400 | 0.2609 | 0.9057 | 0.9055 |
| 0.194 | 17.39 | 1600 | 0.2666 | 0.9071 | 0.9069 |
| 0.1873 | 19.57 | 1800 | 0.2715 | 0.9082 | 0.9083 |
| 0.1768 | 21.74 | 2000 | 0.2787 | 0.9036 | 0.9035 |
| 0.1685 | 23.91 | 2200 | 0.2918 | 0.9007 | 0.9008 |
| 0.1646 | 26.09 | 2400 | 0.2784 | 0.9068 | 0.9069 |
| 0.1569 | 28.26 | 2600 | 0.2988 | 0.9047 | 0.9049 |
| 0.1472 | 30.43 | 2800 | 0.2988 | 0.8915 | 0.8912 |
| 0.144 | 32.61 | 3000 | 0.3173 | 0.9027 | 0.9028 |
| 0.1345 | 34.78 | 3200 | 0.3016 | 0.8959 | 0.8960 |
| 0.1315 | 36.96 | 3400 | 0.3170 | 0.8967 | 0.8966 |
| 0.1257 | 39.13 | 3600 | 0.3426 | 0.8923 | 0.8925 |
| 0.1193 | 41.3 | 3800 | 0.3451 | 0.8930 | 0.8932 |
| 0.1119 | 43.48 | 4000 | 0.3724 | 0.8905 | 0.8905 |
| 0.1104 | 45.65 | 4200 | 0.3722 | 0.8902 | 0.8905 |
| 0.1027 | 47.83 | 4400 | 0.3907 | 0.8893 | 0.8891 |
| 0.103 | 50.0 | 4600 | 0.3820 | 0.8987 | 0.8987 |
| 0.0957 | 52.17 | 4800 | 0.4251 | 0.8914 | 0.8912 |
| 0.0948 | 54.35 | 5000 | 0.4199 | 0.8921 | 0.8919 |
| 0.0901 | 56.52 | 5200 | 0.4169 | 0.8915 | 0.8912 |
| 0.0871 | 58.7 | 5400 | 0.4306 | 0.8877 | 0.8877 |
| 0.082 | 60.87 | 5600 | 0.4256 | 0.8883 | 0.8884 |
| 0.0821 | 63.04 | 5800 | 0.4689 | 0.8886 | 0.8884 |
| 0.0747 | 65.22 | 6000 | 0.4801 | 0.8958 | 0.8960 |
| 0.0778 | 67.39 | 6200 | 0.4491 | 0.8927 | 0.8925 |
| 0.0709 | 69.57 | 6400 | 0.4827 | 0.8866 | 0.8864 |
| 0.073 | 71.74 | 6600 | 0.4888 | 0.8871 | 0.8871 |
| 0.0674 | 73.91 | 6800 | 0.5019 | 0.8892 | 0.8891 |
| 0.0655 | 76.09 | 7000 | 0.5133 | 0.8907 | 0.8905 |
| 0.0675 | 78.26 | 7200 | 0.4999 | 0.8883 | 0.8884 |
| 0.0646 | 80.43 | 7400 | 0.5117 | 0.8893 | 0.8891 |
| 0.0635 | 82.61 | 7600 | 0.5107 | 0.8898 | 0.8898 |
| 0.0592 | 84.78 | 7800 | 0.5339 | 0.8906 | 0.8905 |
| 0.0566 | 86.96 | 8000 | 0.5493 | 0.8879 | 0.8877 |
| 0.0602 | 89.13 | 8200 | 0.5342 | 0.8831 | 0.8830 |
| 0.0592 | 91.3 | 8400 | 0.5491 | 0.8912 | 0.8912 |
| 0.0539 | 93.48 | 8600 | 0.5585 | 0.8884 | 0.8884 |
| 0.0559 | 95.65 | 8800 | 0.5411 | 0.8919 | 0.8919 |
| 0.0534 | 97.83 | 9000 | 0.5574 | 0.8906 | 0.8905 |
| 0.0547 | 100.0 | 9200 | 0.5596 | 0.8865 | 0.8864 |
| 0.0502 | 102.17 | 9400 | 0.5609 | 0.8885 | 0.8884 |
| 0.0563 | 104.35 | 9600 | 0.5453 | 0.8871 | 0.8871 |
| 0.0527 | 106.52 | 9800 | 0.5504 | 0.8884 | 0.8884 |
| 0.052 | 108.7 | 10000 | 0.5529 | 0.8871 | 0.8871 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H4-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T21:34:30+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2522
- F1 Score: 0.9096
- Accuracy: 0.9097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3211 | 2.17 | 200 | 0.2793 | 0.8947 | 0.8946 |
| 0.2662 | 4.35 | 400 | 0.2867 | 0.8956 | 0.8953 |
| 0.2559 | 6.52 | 600 | 0.2689 | 0.8934 | 0.8932 |
| 0.2549 | 8.7 | 800 | 0.2676 | 0.8982 | 0.8980 |
| 0.244 | 10.87 | 1000 | 0.2582 | 0.9012 | 0.9014 |
| 0.2409 | 13.04 | 1200 | 0.2555 | 0.9007 | 0.9008 |
| 0.2348 | 15.22 | 1400 | 0.2508 | 0.9055 | 0.9055 |
| 0.2298 | 17.39 | 1600 | 0.2531 | 0.9077 | 0.9076 |
| 0.2269 | 19.57 | 1800 | 0.2567 | 0.8997 | 0.9001 |
| 0.222 | 21.74 | 2000 | 0.2597 | 0.9022 | 0.9021 |
| 0.2159 | 23.91 | 2200 | 0.2554 | 0.9055 | 0.9055 |
| 0.2145 | 26.09 | 2400 | 0.2550 | 0.9077 | 0.9076 |
| 0.2127 | 28.26 | 2600 | 0.2576 | 0.9047 | 0.9049 |
| 0.2094 | 30.43 | 2800 | 0.2528 | 0.9069 | 0.9069 |
| 0.2051 | 32.61 | 3000 | 0.2605 | 0.9046 | 0.9049 |
| 0.2007 | 34.78 | 3200 | 0.2592 | 0.9067 | 0.9069 |
| 0.2018 | 36.96 | 3400 | 0.2576 | 0.9074 | 0.9076 |
| 0.198 | 39.13 | 3600 | 0.2567 | 0.9060 | 0.9062 |
| 0.1945 | 41.3 | 3800 | 0.2638 | 0.9031 | 0.9035 |
| 0.1894 | 43.48 | 4000 | 0.2697 | 0.9032 | 0.9035 |
| 0.1971 | 45.65 | 4200 | 0.2644 | 0.9066 | 0.9069 |
| 0.1878 | 47.83 | 4400 | 0.2695 | 0.9060 | 0.9062 |
| 0.1864 | 50.0 | 4600 | 0.2698 | 0.9025 | 0.9028 |
| 0.1834 | 52.17 | 4800 | 0.2733 | 0.9026 | 0.9028 |
| 0.1849 | 54.35 | 5000 | 0.2687 | 0.9068 | 0.9069 |
| 0.1794 | 56.52 | 5200 | 0.2728 | 0.9049 | 0.9049 |
| 0.1778 | 58.7 | 5400 | 0.2762 | 0.9039 | 0.9042 |
| 0.174 | 60.87 | 5600 | 0.2727 | 0.9034 | 0.9035 |
| 0.1764 | 63.04 | 5800 | 0.2764 | 0.9028 | 0.9028 |
| 0.1712 | 65.22 | 6000 | 0.2843 | 0.9005 | 0.9008 |
| 0.1732 | 67.39 | 6200 | 0.2781 | 0.9021 | 0.9021 |
| 0.1687 | 69.57 | 6400 | 0.2778 | 0.9041 | 0.9042 |
| 0.1709 | 71.74 | 6600 | 0.2827 | 0.9048 | 0.9049 |
| 0.1661 | 73.91 | 6800 | 0.2840 | 0.9013 | 0.9014 |
| 0.1641 | 76.09 | 7000 | 0.2825 | 0.9028 | 0.9028 |
| 0.1663 | 78.26 | 7200 | 0.2867 | 0.8986 | 0.8987 |
| 0.162 | 80.43 | 7400 | 0.2853 | 0.9013 | 0.9014 |
| 0.1624 | 82.61 | 7600 | 0.2917 | 0.8957 | 0.8960 |
| 0.1628 | 84.78 | 7800 | 0.2895 | 0.8986 | 0.8987 |
| 0.161 | 86.96 | 8000 | 0.2899 | 0.8965 | 0.8966 |
| 0.1611 | 89.13 | 8200 | 0.2888 | 0.8972 | 0.8973 |
| 0.1597 | 91.3 | 8400 | 0.2939 | 0.8965 | 0.8966 |
| 0.1551 | 93.48 | 8600 | 0.3008 | 0.8943 | 0.8946 |
| 0.1581 | 95.65 | 8800 | 0.2983 | 0.8937 | 0.8939 |
| 0.156 | 97.83 | 9000 | 0.2947 | 0.8965 | 0.8966 |
| 0.1558 | 100.0 | 9200 | 0.2942 | 0.8965 | 0.8966 |
| 0.1559 | 102.17 | 9400 | 0.2962 | 0.8958 | 0.8960 |
| 0.1571 | 104.35 | 9600 | 0.2950 | 0.8958 | 0.8960 |
| 0.1553 | 106.52 | 9800 | 0.2972 | 0.8972 | 0.8973 |
| 0.1522 | 108.7 | 10000 | 0.2964 | 0.8965 | 0.8966 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H4-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T21:35:04+00:00 |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "diffusers"} | rubbrband/sdxl10ArienmixxlAsian_v45Pruned | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| null | 2024-04-26T21:35:56+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-turkish-300m-8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2539
- Wer: 0.1949
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:-----:|:---------------:|:------:|
| 3.4377 | 0.1724 | 500 | 0.7853 | 0.6509 |
| 0.7415 | 0.3447 | 1000 | 0.4365 | 0.4855 |
| 0.472 | 0.5171 | 1500 | 0.3851 | 0.4410 |
| 0.3678 | 0.6894 | 2000 | 0.3468 | 0.4292 |
| 0.3512 | 0.8618 | 2500 | 0.3287 | 0.4139 |
| 0.3345 | 1.0341 | 3000 | 0.3030 | 0.3810 |
| 0.2976 | 1.2065 | 3500 | 0.3085 | 0.3702 |
| 0.2841 | 1.3788 | 4000 | 0.3024 | 0.3964 |
| 0.2674 | 1.5512 | 4500 | 0.2864 | 0.3471 |
| 0.2693 | 1.7235 | 5000 | 0.2664 | 0.3411 |
| 0.2564 | 1.8959 | 5500 | 0.2700 | 0.3399 |
| 0.2407 | 2.0683 | 6000 | 0.2649 | 0.3284 |
| 0.2225 | 2.2406 | 6500 | 0.2619 | 0.3243 |
| 0.2209 | 2.4130 | 7000 | 0.2634 | 0.3154 |
| 0.2221 | 2.5853 | 7500 | 0.2700 | 0.3250 |
| 0.2104 | 2.7577 | 8000 | 0.2576 | 0.3115 |
| 0.2095 | 2.9300 | 8500 | 0.2522 | 0.3123 |
| 0.2031 | 3.1024 | 9000 | 0.2453 | 0.2954 |
| 0.1849 | 3.2747 | 9500 | 0.2483 | 0.2949 |
| 0.1911 | 3.4471 | 10000 | 0.2454 | 0.2984 |
| 0.1784 | 3.6194 | 10500 | 0.2619 | 0.2956 |
| 0.1891 | 3.7918 | 11000 | 0.2520 | 0.2870 |
| 0.1822 | 3.9642 | 11500 | 0.2456 | 0.2945 |
| 0.1633 | 4.1365 | 12000 | 0.2473 | 0.2905 |
| 0.1594 | 4.3089 | 12500 | 0.2413 | 0.2863 |
| 0.1616 | 4.4812 | 13000 | 0.2499 | 0.2852 |
| 0.1633 | 4.6536 | 13500 | 0.2414 | 0.2844 |
| 0.1652 | 4.8259 | 14000 | 0.2330 | 0.2894 |
| 0.1659 | 4.9983 | 14500 | 0.2339 | 0.2703 |
| 0.1496 | 5.1706 | 15000 | 0.2405 | 0.2832 |
| 0.1468 | 5.3430 | 15500 | 0.2378 | 0.2731 |
| 0.1435 | 5.5153 | 16000 | 0.2328 | 0.2679 |
| 0.1386 | 5.6877 | 16500 | 0.2332 | 0.2715 |
| 0.1422 | 5.8600 | 17000 | 0.2328 | 0.2683 |
| 0.1429 | 6.0324 | 17500 | 0.2500 | 0.2715 |
| 0.1271 | 6.2048 | 18000 | 0.2447 | 0.2635 |
| 0.1374 | 6.3771 | 18500 | 0.2412 | 0.2679 |
| 0.1306 | 6.5495 | 19000 | 0.2403 | 0.2604 |
| 0.1287 | 6.7218 | 19500 | 0.2319 | 0.2541 |
| 0.131 | 6.8942 | 20000 | 0.2407 | 0.2600 |
| 0.1261 | 7.0665 | 20500 | 0.2335 | 0.2547 |
| 0.1202 | 7.2389 | 21000 | 0.2321 | 0.2509 |
| 0.1194 | 7.4112 | 21500 | 0.2380 | 0.2546 |
| 0.1216 | 7.5836 | 22000 | 0.2515 | 0.2560 |
| 0.1139 | 7.7559 | 22500 | 0.2295 | 0.2502 |
| 0.1159 | 7.9283 | 23000 | 0.2291 | 0.2529 |
| 0.1145 | 8.1007 | 23500 | 0.2471 | 0.2507 |
| 0.1072 | 8.2730 | 24000 | 0.2327 | 0.2456 |
| 0.1106 | 8.4454 | 24500 | 0.2243 | 0.2461 |
| 0.1069 | 8.6177 | 25000 | 0.2305 | 0.2456 |
| 0.1116 | 8.7901 | 25500 | 0.2397 | 0.2486 |
| 0.1079 | 8.9624 | 26000 | 0.2417 | 0.2528 |
| 0.094 | 9.1348 | 26500 | 0.2484 | 0.2442 |
| 0.0954 | 9.3071 | 27000 | 0.2385 | 0.2477 |
| 0.0981 | 9.4795 | 27500 | 0.2526 | 0.2516 |
| 0.1037 | 9.6518 | 28000 | 0.2346 | 0.2391 |
| 0.0934 | 9.8242 | 28500 | 0.2342 | 0.2414 |
| 0.0968 | 9.9966 | 29000 | 0.2385 | 0.2387 |
| 0.0954 | 10.1689 | 29500 | 0.2367 | 0.2389 |
| 0.0903 | 10.3413 | 30000 | 0.2346 | 0.2365 |
| 0.0931 | 10.5136 | 30500 | 0.2472 | 0.2385 |
| 0.0911 | 10.6860 | 31000 | 0.2562 | 0.2368 |
| 0.0902 | 10.8583 | 31500 | 0.2375 | 0.2390 |
| 0.0831 | 11.0307 | 32000 | 0.2265 | 0.2326 |
| 0.0822 | 11.2030 | 32500 | 0.2464 | 0.2305 |
| 0.083 | 11.3754 | 33000 | 0.2361 | 0.2299 |
| 0.0802 | 11.5477 | 33500 | 0.2440 | 0.2389 |
| 0.0757 | 11.7201 | 34000 | 0.2435 | 0.2261 |
| 0.0781 | 11.8925 | 34500 | 0.2410 | 0.2293 |
| 0.0823 | 12.0648 | 35000 | 0.2551 | 0.2423 |
| 0.0748 | 12.2372 | 35500 | 0.2448 | 0.2245 |
| 0.0724 | 12.4095 | 36000 | 0.2369 | 0.2208 |
| 0.0716 | 12.5819 | 36500 | 0.2462 | 0.2280 |
| 0.0734 | 12.7542 | 37000 | 0.2407 | 0.2255 |
| 0.0771 | 12.9266 | 37500 | 0.2461 | 0.2304 |
| 0.0715 | 13.0989 | 38000 | 0.2496 | 0.2237 |
| 0.0702 | 13.2713 | 38500 | 0.2515 | 0.2228 |
| 0.0697 | 13.4436 | 39000 | 0.2377 | 0.2217 |
| 0.0712 | 13.6160 | 39500 | 0.2446 | 0.2182 |
| 0.0641 | 13.7883 | 40000 | 0.2461 | 0.2187 |
| 0.0712 | 13.9607 | 40500 | 0.2534 | 0.2155 |
| 0.0644 | 14.1331 | 41000 | 0.2428 | 0.2140 |
| 0.0584 | 14.3054 | 41500 | 0.2595 | 0.2156 |
| 0.0621 | 14.4778 | 42000 | 0.2474 | 0.2139 |
| 0.0634 | 14.6501 | 42500 | 0.2571 | 0.2184 |
| 0.0643 | 14.8225 | 43000 | 0.2556 | 0.2180 |
| 0.0599 | 14.9948 | 43500 | 0.2532 | 0.2160 |
| 0.06 | 15.1672 | 44000 | 0.2468 | 0.2182 |
| 0.0555 | 15.3395 | 44500 | 0.2530 | 0.2152 |
| 0.0542 | 15.5119 | 45000 | 0.2530 | 0.2080 |
| 0.0533 | 15.6842 | 45500 | 0.2414 | 0.2111 |
| 0.0587 | 15.8566 | 46000 | 0.2457 | 0.2081 |
| 0.0556 | 16.0290 | 46500 | 0.2509 | 0.2085 |
| 0.0538 | 16.2013 | 47000 | 0.2500 | 0.2067 |
| 0.052 | 16.3737 | 47500 | 0.2472 | 0.2076 |
| 0.0504 | 16.5460 | 48000 | 0.2537 | 0.2080 |
| 0.0562 | 16.7184 | 48500 | 0.2512 | 0.2047 |
| 0.0487 | 16.8907 | 49000 | 0.2604 | 0.2058 |
| 0.0526 | 17.0631 | 49500 | 0.2530 | 0.2064 |
| 0.0457 | 17.2354 | 50000 | 0.2531 | 0.2034 |
| 0.0483 | 17.4078 | 50500 | 0.2532 | 0.2032 |
| 0.0456 | 17.5801 | 51000 | 0.2585 | 0.2040 |
| 0.0507 | 17.7525 | 51500 | 0.2550 | 0.2025 |
| 0.0471 | 17.9249 | 52000 | 0.2439 | 0.2003 |
| 0.0485 | 18.0972 | 52500 | 0.2517 | 0.1989 |
| 0.0472 | 18.2696 | 53000 | 0.2540 | 0.2007 |
| 0.0472 | 18.4419 | 53500 | 0.2595 | 0.2016 |
| 0.0464 | 18.6143 | 54000 | 0.2491 | 0.1987 |
| 0.0436 | 18.7866 | 54500 | 0.2581 | 0.1988 |
| 0.0443 | 18.9590 | 55000 | 0.2530 | 0.1978 |
| 0.0454 | 19.1313 | 55500 | 0.2525 | 0.1967 |
| 0.039 | 19.3037 | 56000 | 0.2537 | 0.1956 |
| 0.0432 | 19.4760 | 56500 | 0.2571 | 0.1975 |
| 0.0431 | 19.6484 | 57000 | 0.2543 | 0.1964 |
| 0.0449 | 19.8208 | 57500 | 0.2543 | 0.1950 |
| 0.0407 | 19.9931 | 58000 | 0.2539 | 0.1949 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_17_0"], "metrics": ["wer"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "wav2vec2-turkish-300m-8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "common_voice_17_0", "type": "common_voice_17_0", "config": "tr", "split": "test", "args": "tr"}, "metrics": [{"type": "wer", "value": 0.19493994377715307, "name": "Wer"}]}]}]} | tgrhn/wav2vec2-turkish-300m-8 | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:38:23+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-fa
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8445
- Wer: 91.9255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.8399 | 1.3158 | 25 | 1.9075 | 88.8199 |
| 0.5744 | 2.6316 | 50 | 1.8445 | 91.9255 |
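The Wer column above is word error rate: the word-level edit distance between hypothesis and reference transcripts, divided by the number of reference words. A minimal sketch of the computation (illustrative, not the exact `evaluate`/`jiwer` implementation the trainer uses):

```python
def wer(reference: str, hypothesis: str) -> float:
    # Word error rate = Levenshtein distance over words / number of reference words.
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / max(1, len(ref))

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat sat"))  # 1 substitution / 3 words
```

A WER of 91.9 (reported here as a percentage) means nearly every reference word required an edit.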
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_11_0"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "whisper-small-fa", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "common_voice_11_0", "type": "common_voice_11_0", "config": "fa", "split": "None", "args": "fa"}, "metrics": [{"type": "wer", "value": 91.92546583850931, "name": "Wer"}]}]}]} | MohammadPourbahram/whisper-small-fa | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"base_model:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:38:55+00:00 |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as a base.
### Models Merged
The following models were included in the merge:
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
---
models:
- model: NousResearch/Meta-Llama-3-8B
parameters:
weight: 0.5
- model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
weight: 0.5
merge_method: task_arithmetic
base_model: NousResearch/Meta-Llama-3-8B
dtype: bfloat16
tokenizer_source: union
```
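Under the standard task-arithmetic formula, each listed model contributes its parameter delta from the base, scaled by its weight: merged = base + Σᵢ wᵢ·(modelᵢ − base). Since the base model's own delta is zero, the 0.5/0.5 configuration above reduces to the base plus half the Instruct delta. A per-parameter sketch on toy scalars (illustrative, not the mergekit implementation):

```python
def task_arithmetic(base, models, weights):
    # merged = base + sum_i w_i * (model_i - base), applied per parameter.
    merged = {}
    for name, b in base.items():
        delta = sum(w * (m[name] - b) for m, w in zip(models, weights))
        merged[name] = b + delta
    return merged

base = {"w": 1.0}
instruct = {"w": 3.0}
# Listing the base itself among the models contributes a zero delta.
print(task_arithmetic(base, [base, instruct], [0.5, 0.5]))  # {'w': 2.0}
```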
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["NousResearch/Meta-Llama-3-8B", "NousResearch/Meta-Llama-3-8B-Instruct"]} | kotyKD/Llama-3-Base-Instruct-variation1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T21:39:07+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2
This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1](https://huggingface.co/ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
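The total train batch size above follows directly from the per-device batch size, device count, and gradient accumulation steps: 8 × 8 × 4 = 256. As a quick check:

```python
per_device_train_batch_size = 8
num_devices = 8
gradient_accumulation_steps = 4

total_train_batch_size = (per_device_train_batch_size
                          * num_devices
                          * gradient_accumulation_steps)
print(total_train_batch_size)  # 256
```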
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1", "model-index": [{"name": "0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2", "results": []}]} | ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T21:39:34+00:00 |
text-generation | transformers |
# Uploaded model
- **Developed by:** yeilho
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | yeilho/llama-3-8b-Instruct-bnb-4bit-medical | null | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:41:44+00:00 |
null | null | {"license": "lgpl-3.0"} | itsprofarul/model | null | [
"license:lgpl-3.0",
"region:us"
]
| null | 2024-04-26T21:42:11+00:00 |
|
null | null | {"license": "openrail"} | victorluan/stu2x | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-26T21:43:20+00:00 |
|
text-generation | transformers | * <span style="color:orange">I'm just tinkering. All credit to the original creator: [Undi](https://huggingface.co/Undi95).</span>
* <span style="color:orange">"rpcal" designates that this model was quantized using an [RP-specific data set](https://huggingface.co/datasets/royallab/PIPPA-cleaned) instead of the generalized wiki or llama data set. This is likely the last model I will create with this method, as Llama-3-8B seems to get markedly dumber when quantized this way. With previous models it was difficult to tell, but the increased error introduced by quantizing Llama-3-8B makes it obvious which method is better. I deleted the lower quants of rpcal because they are noticeably dumber by comparison. This one seems to work fine, and is the only one I would recommend if you want to compare the two methods yourself. </span>
* <span style="color:orange">This model: EXL2 @ 8.0 bpw using RP data for calibration.</span>
---
# LewdPlay-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
The new EVOLVE merge method was used (on MMLU specifically), see below for more information!
Unholy was used for uncensoring, Roleplay Llama 3 for the DPO training it received on top, and LewdPlay for the... lewd side.
## Prompt template: Llama3
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 as a base.
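DARE sparsifies each fine-tuned model's delta before the TIES-style merge: it randomly drops a fraction (1 − density) of the delta entries and rescales the survivors by 1/density so the expected delta is preserved. A minimal sketch of that drop-and-rescale step (illustrative, not the mergekit implementation):

```python
import random

def dare_sparsify(delta, density, rng):
    # Keep each delta entry with probability `density`; rescale survivors by 1/density.
    return [d / density if rng.random() < density else 0.0 for d in delta]

rng = random.Random(0)
delta = [0.4, -0.2, 0.1, 0.3]
print(dare_sparsify(delta, density=1.0, rng=rng))  # density 1.0 keeps everything unchanged
print(dare_sparsify(delta, density=0.5, rng=rng))  # roughly half zeroed, survivors doubled
```

The per-slice `density` values in the configuration below control exactly this drop rate for each source model's delta.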
### Models Merged
The following models were included in the merge:
* ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
* ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
dtype: bfloat16
merge_method: dare_ties
parameters:
int8_mask: 1.0
normalize: 0.0
slices:
- sources:
- layer_range: [0, 4]
model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
parameters:
density: 1.0
weight: 0.6861808716092435
- layer_range: [0, 4]
model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
parameters:
density: 0.6628290134113985
weight: 0.5815923052193855
- layer_range: [0, 4]
model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
parameters:
density: 1.0
weight: 0.5113886163963061
- sources:
- layer_range: [4, 8]
model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
parameters:
density: 0.892655547455918
weight: 0.038732602391021484
- layer_range: [4, 8]
model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
parameters:
density: 1.0
weight: 0.1982145486303527
- layer_range: [4, 8]
model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
parameters:
density: 1.0
weight: 0.6843011350690802
- sources:
- layer_range: [8, 12]
model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
parameters:
density: 0.7817511027396784
weight: 0.13053333213489704
- layer_range: [8, 12]
model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
parameters:
density: 0.6963703515864826
weight: 0.20525481492667985
- layer_range: [8, 12]
model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
parameters:
density: 0.6983086326765777
weight: 0.5843953969574106
- sources:
- layer_range: [12, 16]
model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
parameters:
density: 0.9632895768462915
weight: 0.2101146706607748
- layer_range: [12, 16]
model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
parameters:
density: 0.597557434542081
weight: 0.6728172621848589
- layer_range: [12, 16]
model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
parameters:
density: 0.756263557607837
weight: 0.2581423726361908
- sources:
- layer_range: [16, 20]
model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
parameters:
density: 1.0
weight: 0.2116035543552448
- layer_range: [16, 20]
model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
parameters:
density: 1.0
weight: 0.22654226422958418
- layer_range: [16, 20]
model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
parameters:
density: 0.8925914810507647
weight: 0.42243766315440867
- sources:
- layer_range: [20, 24]
model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
parameters:
density: 0.7697608089825734
weight: 0.1535118632140203
- layer_range: [20, 24]
model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
parameters:
density: 0.9886758076773643
weight: 0.3305040603868546
- layer_range: [20, 24]
model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
parameters:
density: 1.0
weight: 0.40670083428654535
- sources:
- layer_range: [24, 28]
model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
parameters:
density: 1.0
weight: 0.4542810478500622
- layer_range: [24, 28]
model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
parameters:
density: 0.8330662483310117
weight: 0.2587495367324508
- layer_range: [24, 28]
model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
parameters:
density: 0.9845313983551542
weight: 0.40378452705975915
- sources:
- layer_range: [28, 32]
model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
parameters:
density: 1.0
weight: 0.2951962192288415
- layer_range: [28, 32]
model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
parameters:
density: 0.960315594933433
weight: 0.13142971773782525
- layer_range: [28, 32]
model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
parameters:
density: 1.0
weight: 0.30838472094518804
```
## Support
If you want to support me, you can [here](https://ko-fi.com/undiai). | {"license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["not-for-all-audiences", "nsfw", "merge"], "base_model": ["vicgalle/Roleplay-Llama-3-8B", "Undi95/Llama-3-Unholy-8B-e4", "Undi95/Llama-3-LewdPlay-8B"]} | zaq-hack/Llama-3-LewdPlay-8B-evo-bpw800-h8-exl2-rpcal | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"nsfw",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:vicgalle/Roleplay-Llama-3-8B",
"base_model:Undi95/Llama-3-Unholy-8B-e4",
"base_model:Undi95/Llama-3-LewdPlay-8B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
]
| null | 2024-04-26T21:43:23+00:00 |
null | null | {} | kotyKD/Llama-3-Base-Instruct-variation2 | null | [
"region:us"
]
| null | 2024-04-26T21:43:34+00:00 |
|
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/openbmb/Eurux-8x22b-nca
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
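Multi-part files like those in the table below can be joined back into a single GGUF with plain concatenation. The `printf` lines here just simulate two downloaded parts so the sketch is self-contained; real part files keep the `.partXofY` naming shown in the table:

```shell
# Simulate two downloaded parts, then join them back into one file (order matters).
printf 'first-half'  > model.gguf.part1of2
printf 'second-half' > model.gguf.part2of2
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```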
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ1_S.gguf) | i1-IQ1_S | 29.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ1_M.gguf) | i1-IQ1_M | 32.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ2_XS.gguf) | i1-IQ2_XS | 42.1 | |
| [GGUF](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ2_S.gguf) | i1-IQ2_S | 42.7 | |
| [GGUF](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ2_M.gguf) | i1-IQ2_M | 46.8 | |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q2_K.gguf.part2of2) | i1-Q2_K | 52.2 | IQ3_XXS probably better |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ3_XXS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ3_XXS.gguf.part2of2) | i1-IQ3_XXS | 55.0 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 58.3 | |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 61.6 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 61.6 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 64.6 | |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 67.9 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 72.7 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 75.6 | |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 80.0 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 80.6 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 85.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 97.1 | |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q5_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q5_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q5_K_M.gguf.part3of3) | i1-Q5_K_M | 100.1 | |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 115.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["reasoning", "preference_learning", "nca"], "datasets": ["openbmb/UltraInteract_sft", "openbmb/UltraInteract_pair", "openbmb/UltraFeedback"], "base_model": "openbmb/Eurux-8x22b-nca", "quantized_by": "mradermacher"} | mradermacher/Eurux-8x22b-nca-i1-GGUF | null | [
"transformers",
"gguf",
"reasoning",
"preference_learning",
"nca",
"en",
"dataset:openbmb/UltraInteract_sft",
"dataset:openbmb/UltraInteract_pair",
"dataset:openbmb/UltraFeedback",
"base_model:openbmb/Eurux-8x22b-nca",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:44:43+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi2-lima
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the GAIR/lima dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5096
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
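For intuition, the linear-warmup-then-cosine schedule implied by `lr_scheduler_warmup_ratio: 0.1` can be sketched as follows (an illustration of the shape, not the Trainer's exact implementation):

```python
import math

def lr_at(step, total_steps, base_lr=2e-05, warmup_ratio=0.1):
    """Linear warmup over the first warmup_ratio of steps, then cosine decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

With the 60 optimizer steps shown in the results table, warmup covers the first 6 steps and the rate then decays back toward zero.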
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2578 | 1.0 | 6 | 2.3195 |
| 2.1177 | 2.0 | 12 | 2.1448 |
| 2.0262 | 3.0 | 18 | 2.1417 |
| 1.9422 | 4.0 | 24 | 2.2227 |
| 1.7786 | 5.0 | 30 | 2.3327 |
| 1.7224 | 6.0 | 36 | 2.4202 |
| 1.684 | 7.0 | 42 | 2.4698 |
| 1.6434 | 8.0 | 48 | 2.4961 |
| 1.616 | 9.0 | 54 | 2.5094 |
| 1.6183 | 10.0 | 60 | 2.5096 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["GAIR/lima"], "base_model": "microsoft/phi-2", "model-index": [{"name": "phi2-lima", "results": []}]} | pkarypis/phi2-lima | null | [
"transformers",
"tensorboard",
"safetensors",
"phi",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"custom_code",
"dataset:GAIR/lima",
"base_model:microsoft/phi-2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T21:45:12+00:00 |
text-generation | transformers | {"license": "apache-2.0"} | mukunds15/gemma_1 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T21:47:12+00:00 |
|
null | null | {} | Aitrepreneur/package | null | [
"region:us"
]
| null | 2024-04-26T21:47:37+00:00 |
|
reinforcement-learning | ml-agents |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Laz4rz/hf-huggy-1-bonus
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]} | Laz4rz/hf-huggy-1-bonus | null | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| null | 2024-04-26T21:48:17+00:00 |
null | null | {"license": "apache-2.0"} | amirabdullah19852020/interpreting_reward_models | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-04-26T21:48:46+00:00 |
|
text-generation | transformers |
I am really enjoying this version of Cinder. More information is coming, along with Cinder-character-specific data: a mix of RAG-generated Q&A on world knowledge, STEM topics, and Cinder character data. I supplemented the Cinder character with an abbreviated Samantha dataset edited for Cinder, and removed a lot of the negative responses.
## Model Overview
Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration.

## Model Summary
The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
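For intuition, the direct preference optimization (DPO) objective mentioned above can be written as a scalar function of sequence log-probabilities. This is a toy sketch with illustrative names, not the actual training code:

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """-log sigmoid(beta * (policy log-ratio margin - reference log-ratio margin));
    the loss shrinks as the policy prefers the chosen response more strongly
    than the reference model does."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```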
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. It is suited for applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat).
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. In the case of a few-shot prompt, the prompt can be formatted as follows:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
### Sample inference code
This code snippet shows how to quickly get started running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-4k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
messages = [
{"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High-quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py).
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager"
* CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Cross Platform Support
ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
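As a rough intuition for the RTN ("round-to-nearest") quantization named above, here is a toy per-group int4 sketch (an illustration only, not the ONNX Runtime implementation):

```python
def rtn_int4_quantize(weights):
    """Pick a scale so the largest magnitude maps to 7, round each weight to
    the nearest step, and clamp to the signed int4 range [-8, 7]."""
    scale = max(abs(w) for w in weights) / 7.0
    scale = scale or 1.0  # all-zero group: avoid division by zero
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    dequant = [v * scale for v in q]  # what the runtime reconstructs at inference
    return q, dequant, scale
```

Unlike AWQ, RTN uses no activation statistics — it simply rounds each weight group independently, which is why it appears here as the cheaper option.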
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies. | {"language": ["en"], "license": "mit", "tags": ["nlp", "code"], "datasets": ["Josephgflowers/just_cinder"], "license_link": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation", "widget": [{"text": "<|system|>\nYou are a helpful assistant.<|end|>\n<|user|>\n"}]} | Josephgflowers/Phi-3-mini-4k-instruct-Cinder-llamafied-with-16bit-GGUF | null | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"nlp",
"code",
"conversational",
"en",
"dataset:Josephgflowers/just_cinder",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T21:50:02+00:00 |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/johnsnowlabs/JSL-MedLlama-3-70B-v1.0
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q5_K_M.gguf) | Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "cc-by-nc-nd-4.0", "library_name": "transformers", "tags": ["llama-3-70b", "sft", "medical"], "base_model": "johnsnowlabs/JSL-MedLlama-3-70B-v1.0", "quantized_by": "mradermacher"} | mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF | null | [
"transformers",
"gguf",
"llama-3-70b",
"sft",
"medical",
"en",
"base_model:johnsnowlabs/JSL-MedLlama-3-70B-v1.0",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:50:24+00:00 |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Quiet-Mistral - GGUF
- Model creator: https://huggingface.co/Crystalcareai/
- Original model: https://huggingface.co/Crystalcareai/Quiet-Mistral/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Quiet-Mistral.Q2_K.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q2_K.gguf) | Q2_K | 2.53GB |
| [Quiet-Mistral.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Quiet-Mistral.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Quiet-Mistral.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Quiet-Mistral.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Quiet-Mistral.Q3_K.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q3_K.gguf) | Q3_K | 3.28GB |
| [Quiet-Mistral.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Quiet-Mistral.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Quiet-Mistral.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Quiet-Mistral.Q4_0.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Quiet-Mistral.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Quiet-Mistral.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Quiet-Mistral.Q4_K.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q4_K.gguf) | Q4_K | 4.07GB |
| [Quiet-Mistral.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Quiet-Mistral.Q4_1.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Quiet-Mistral.Q5_0.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Quiet-Mistral.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Quiet-Mistral.Q5_K.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q5_K.gguf) | Q5_K | 4.78GB |
| [Quiet-Mistral.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Quiet-Mistral.Q5_1.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Quiet-Mistral.Q6_K.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q6_K.gguf) | Q6_K | 5.53GB |
Original model description:
~~Mistral 7b v0.2 with attention_dropout=0.6, for training purposes~~
Conversion process:
1. Download original weights from https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar
2. Convert with https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/convert_mistral_weights_to_hf.py
3. You may need to copy `tokenizer.model` from the Mistral-7B-Instruct-v0.2 repo.
| {} | RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf | null | [
"gguf",
"region:us"
]
| null | 2024-04-26T21:50:43+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2558
- F1 Score: 0.9111
- Accuracy: 0.9110
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3011 | 2.17 | 200 | 0.2643 | 0.8981 | 0.8980 |
| 0.2454 | 4.35 | 400 | 0.2584 | 0.8979 | 0.8980 |
| 0.2237 | 6.52 | 600 | 0.2645 | 0.9016 | 0.9014 |
| 0.2107 | 8.7 | 800 | 0.2743 | 0.8915 | 0.8912 |
| 0.1936 | 10.87 | 1000 | 0.2737 | 0.8958 | 0.8960 |
| 0.181 | 13.04 | 1200 | 0.2963 | 0.8827 | 0.8823 |
| 0.1593 | 15.22 | 1400 | 0.3184 | 0.8908 | 0.8905 |
| 0.1453 | 17.39 | 1600 | 0.3405 | 0.8839 | 0.8836 |
| 0.1285 | 19.57 | 1800 | 0.3479 | 0.8939 | 0.8939 |
| 0.1111 | 21.74 | 2000 | 0.4011 | 0.8771 | 0.8768 |
| 0.1005 | 23.91 | 2200 | 0.4055 | 0.8819 | 0.8816 |
| 0.0903 | 26.09 | 2400 | 0.4202 | 0.8913 | 0.8912 |
| 0.0782 | 28.26 | 2600 | 0.4638 | 0.8853 | 0.8850 |
| 0.0666 | 30.43 | 2800 | 0.4875 | 0.8773 | 0.8768 |
| 0.063 | 32.61 | 3000 | 0.5041 | 0.8791 | 0.8789 |
| 0.0549 | 34.78 | 3200 | 0.4648 | 0.8886 | 0.8884 |
| 0.0479 | 36.96 | 3400 | 0.5217 | 0.8907 | 0.8905 |
| 0.0426 | 39.13 | 3600 | 0.6087 | 0.8800 | 0.8802 |
| 0.0398 | 41.3 | 3800 | 0.5759 | 0.8764 | 0.8761 |
| 0.0347 | 43.48 | 4000 | 0.6083 | 0.8818 | 0.8816 |
| 0.0293 | 45.65 | 4200 | 0.6258 | 0.8877 | 0.8877 |
| 0.0259 | 47.83 | 4400 | 0.7382 | 0.8804 | 0.8802 |
| 0.0279 | 50.0 | 4600 | 0.6818 | 0.8866 | 0.8864 |
| 0.0255 | 52.17 | 4800 | 0.6983 | 0.8873 | 0.8871 |
| 0.0221 | 54.35 | 5000 | 0.7424 | 0.8886 | 0.8884 |
| 0.0243 | 56.52 | 5200 | 0.6928 | 0.8826 | 0.8823 |
| 0.0181 | 58.7 | 5400 | 0.7622 | 0.8814 | 0.8816 |
| 0.0172 | 60.87 | 5600 | 0.7647 | 0.8856 | 0.8857 |
| 0.0187 | 63.04 | 5800 | 0.7383 | 0.8818 | 0.8816 |
| 0.0152 | 65.22 | 6000 | 0.7824 | 0.8879 | 0.8877 |
| 0.0144 | 67.39 | 6200 | 0.8176 | 0.8908 | 0.8905 |
| 0.0144 | 69.57 | 6400 | 0.7774 | 0.8872 | 0.8871 |
| 0.0133 | 71.74 | 6600 | 0.8605 | 0.8885 | 0.8884 |
| 0.0127 | 73.91 | 6800 | 0.8442 | 0.8865 | 0.8864 |
| 0.0128 | 76.09 | 7000 | 0.8120 | 0.8866 | 0.8864 |
| 0.0108 | 78.26 | 7200 | 0.8403 | 0.8839 | 0.8836 |
| 0.0109 | 80.43 | 7400 | 0.8822 | 0.8873 | 0.8871 |
| 0.0086 | 82.61 | 7600 | 0.8667 | 0.8878 | 0.8877 |
| 0.0099 | 84.78 | 7800 | 0.8767 | 0.8858 | 0.8857 |
| 0.0086 | 86.96 | 8000 | 0.9134 | 0.8872 | 0.8871 |
| 0.01 | 89.13 | 8200 | 0.9166 | 0.8891 | 0.8891 |
| 0.0078 | 91.3 | 8400 | 0.9330 | 0.8934 | 0.8932 |
| 0.0073 | 93.48 | 8600 | 0.9231 | 0.8926 | 0.8925 |
| 0.0078 | 95.65 | 8800 | 0.9328 | 0.8900 | 0.8898 |
| 0.0085 | 97.83 | 9000 | 0.9496 | 0.8881 | 0.8877 |
| 0.0076 | 100.0 | 9200 | 0.9058 | 0.8906 | 0.8905 |
| 0.0062 | 102.17 | 9400 | 0.9272 | 0.8893 | 0.8891 |
| 0.0072 | 104.35 | 9600 | 0.9439 | 0.8846 | 0.8843 |
| 0.0073 | 106.52 | 9800 | 0.9272 | 0.8866 | 0.8864 |
| 0.007 | 108.7 | 10000 | 0.9262 | 0.8873 | 0.8871 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H4-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T21:51:44+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llava-1.5-7b-hf-ft-mix-vsft
This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
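The total train batch size above follows from the per-device batch size and gradient accumulation; as a quick sketch (single device assumed, as in this run):

```python
def effective_batch_size(per_device=1, grad_accum=8, num_devices=1):
    """Effective batch size seen by each optimizer step."""
    return per_device * grad_accum * num_devices

# per_device=1 with grad_accum=8 gives the total of 8 reported above
```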
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "llava-hf/llava-1.5-7b-hf", "model-index": [{"name": "llava-1.5-7b-hf-ft-mix-vsft", "results": []}]} | guntinik/llava-1.5-7b-hf-ft-mix-vsft | null | [
"peft",
"tensorboard",
"safetensors",
"llava",
"trl",
"sft",
"generated_from_trainer",
"base_model:llava-hf/llava-1.5-7b-hf",
"4-bit",
"region:us"
]
| null | 2024-04-26T21:52:15+00:00 |
null | null | {"license": "openrail"} | xmo0n/RVC_MODELS | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-26T21:52:23+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | HenryCai1129/adapter-llama-adapterhappy2sad-1k-50-0.009 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:53:23+00:00 |
null | null | {} | o2satz/MedMerge_Llama3 | null | [
"gguf",
"region:us"
]
| null | 2024-04-26T21:53:33+00:00 |